Conference Paper
LPQ and LDP Descriptors with ML Representation For
Kinship verification
Abdelhakim Chergui1, Salim Ouchtati1, Hichem Telli2, Fares Bougourzi3, and Salah
Eddine Bekhouche4
1Laboratory of LRES, University of Skikda, Algeria.
2Laboratory of LESIA, University of Biskra, Algeria.
3Laboratory of LTII, University of Bejaia, Algeria.
4Department of Electrical Engineering, University of Djelfa, Algeria.
Abstract. The automatic verification of kinship from facial images is a challenging problem that has recently attracted much interest in computer vision. Kinship verification has become an active research field due to its potential applications, such as organizing photo albums, annotating images, recognizing resemblances among humans and finding missing children. In this paper, we propose an approach that takes two facial images as input and outputs a kinship result (kinship / non-kinship). The approach is based on the Local Phase Quantization (LPQ) and Local Directional Pattern (LDP) texture descriptors together with the Multi-Level (ML) representation. It consists of six stages: (i) face preprocessing, (ii) features extraction, (iii) face representation, (iv) pair features representation and normalization, (v) features selection and (vi) kinship verification. Experiments are conducted on four public databases (Cornell KinFace, UB KinFace, KinFaceW-I and KinFaceW-II). The obtained results compare well with state-of-the-art approaches.
Keywords: Kinship verification, LPQ, LDP, ML.
1 Introduction:
Over the past two decades, a large number of face analysis problems have been investigated in the computer vision and pattern recognition community. Facial images convey many important human characteristics, such as identity, gender, expression, age and ethnicity. Kinship verification from facial images is an interesting and challenging problem. There are several types of kinship relationships: father-daughter (F-D), mother-son (M-S), father-son (F-S) and mother-daughter (M-D). Nowadays, the recognition of these familial relationships has become an active area of research, with applications such as organizing photo albums, annotating images, recognizing resemblances among humans and finding missing children.
Many studies have been conducted on kinship verification from facial images; they can be categorized by the type of feature extraction and the similarity algorithm. Fang et al. [5] proposed a system for kinship verification based on pictorial structure model (PSM) feature extraction and selection methods, with KNN for the classification phase; they obtained promising results on the Cornell KinFace database. Xia et al. [13] used another database, named UB KinFace, which contains images of child, young-parent and old-parent faces. They applied an extended transfer subspace learning method to mitigate the large divergence between the distributions of children and old parents, using an intermediate distribution to bridge and reduce the divergence between the source distributions.
Another interesting work was proposed by Shao et al. [10], who used version 2 of the UB KinFace database to verify kinship based on robust local Gabor filters that extract genetic-invariant features; metric learning and transfer subspace learning were adopted to bridge the discrepancy between children and their old parents. Lu et al. [8] proposed a neighborhood repulsed metric learning (NRML) method for kinship verification. In addition, they proposed a multiview NRML (MNRML) method that seeks a common distance metric in order to make better use of multiple descriptor features; they applied their methods to the KinFaceW-I and KinFaceW-II datasets.
Yan et al. [15] proposed a discriminative multimetric learning method for kinship verification. First, they extracted multiple features using different face descriptors; then they jointly learned multiple distance metrics under which a pair of face images with a kinship relation is more likely to have a smaller distance than a pair without one. They applied their method to two databases: Cornell KinFace and UB KinFace. Yan et al. [16] proposed a prototype-based discriminative feature learning (PDFL) method for kinship verification. This method aims to learn discriminative mid-level features from a set of face samples with unlabeled kinship relations collected in the wild, which serves as a reference set; each sample in the training kinship dataset is then represented as a mid-level feature vector, where each entry is the decision value of one SVM. They applied their method to both the Cornell KinFace and UB KinFace databases.
Wang et al. [12] proposed a deep kinship verification (DKV) model that integrates a deep learning architecture into metric learning. They employed a deep learning model followed by a metric learning formulation to select nonlinear features, finding a projection space in which the margin between the negative sample pairs (i.e., parent and child without a kinship relation) and the positive sample pairs is as large as possible; they applied their method to the KinFaceW-I and KinFaceW-II datasets. Zhou et al. [17] proposed an ensemble similarity learning (ESL) method: they introduced a sparse bilinear similarity function to model the relative characteristics encoded in kin data. The similarity function, parameterized by a diagonal matrix, is computationally efficient, which makes it practical for real-world high-dimensional kinship verification applications. Yan [14] proposed a neighborhood repulsed correlation metric learning (NRCML) method based on the correlation similarity measure, under which the kin relation of facial images can be better highlighted.
The rest of the paper is organized as follows: our method is introduced in section 2; the experimental results demonstrating the efficacy of the proposed method are presented in section 3; finally, we conclude our work in section 4.
2 Proposed method
Kinship verification is the task of deciding, from the faces of two persons, whether there is a familial relationship between them. Our proposed method consists of six stages: (i) face preprocessing, (ii) features extraction, (iii) face representation, (iv) pair features representation and normalization, (v) features selection and (vi) kinship verification. Fig. 1 illustrates the general structure of the proposed framework.
Fig. 1: General structure of the proposed Method.
2.1 Face preprocessing
In the face preprocessing stage, we applied the Haar cascade object detector, which uses the Viola-Jones algorithm [11], to detect the face region; we then detected the face landmarks using the Ensemble of Regression Trees (ERT) algorithm [7]. The locations of the two eyes are used to rectify the 2D face pose by applying a 2D similarity transform to the original face image [2]. As in [3], we set the parameters k_side = 0.5, k_top = 1 and k_bottom = 1.75 to crop the face region of interest (ROI).
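The cropping step can be sketched as follows, assuming the image has already been rotated so that the two eyes lie on a horizontal line. This is an illustrative sketch, not the authors' code: `crop_face_roi` is a hypothetical helper whose parameters mirror the paper's k_side, k_top and k_bottom, and the exact cropping rule of [3] may differ in detail.

```python
import numpy as np

def crop_face_roi(img, left_eye, right_eye, k_side=0.5, k_top=1.0, k_bottom=1.75):
    """Crop the face ROI from an eye-aligned grayscale image.

    The crop box is derived from the inter-ocular distance d: it extends
    k_side*d beyond each eye horizontally, k_top*d above the eye line
    and k_bottom*d below it.
    """
    (xl, yl), (xr, yr) = left_eye, right_eye
    d = abs(xr - xl)                      # inter-ocular distance
    x0 = int(round(xl - k_side * d))
    x1 = int(round(xr + k_side * d))
    y0 = int(round(yl - k_top * d))
    y1 = int(round(yl + k_bottom * d))
    h, w = img.shape[:2]
    x0, y0 = max(x0, 0), max(y0, 0)       # clip the box to the image borders
    x1, y1 = min(x1, w), min(y1, h)
    return img[y0:y1, x0:x1]
```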
2.2 Features extraction
In this stage , we extracted the features by using two different texture descriptors (LDP
and LPQ) , and for the face representation we used the ML for increased number of
features
Local Directional Pattern (LDP) : an eight-bit binary code assigned to each pixel of an input grayscale image. The pattern is calculated by comparing the relative edge response values of a pixel in different directions. The eight directional edge response values of a particular pixel are calculated using the Kirsch masks in eight different orientations (M0-M7) centered on the pixel position [6]. These masks are shown in Fig. 2.
M0 (E):  [-3 -3  5; -3  0  5; -3 -3  5]
M1 (NE): [-3  5  5; -3  0  5; -3 -3 -3]
M2 (N):  [ 5  5  5; -3  0 -3; -3 -3 -3]
M3 (NW): [ 5  5 -3;  5  0 -3; -3 -3 -3]
M4 (W):  [ 5 -3 -3;  5  0 -3;  5 -3 -3]
M5 (SW): [-3 -3 -3;  5  0 -3;  5  5 -3]
M6 (S):  [-3 -3 -3; -3  0 -3;  5  5  5]
M7 (SE): [-3 -3 -3; -3  0  5; -3  5  5]
Fig. 2: Eight directional Kirsch edge masks
By applying the eight masks, eight edge response values m_0, m_1, ..., m_7 are obtained, each representing the edge significance in its respective direction. The response values are not equally important in all directions. In order to generate the LDP codewords, a value k must be given; the top k values of |m_j| are set to 1, and the remaining 8 - k values are set to 0. The LDP code of each pixel is calculated using the formulas below:

LDP_k = \sum_{i=0}^{7} b_i(m_i - m_k) \cdot 2^i    (1)

b_i(a) = \begin{cases} 1 & \text{if } a \geq 0 \\ 0 & \text{otherwise} \end{cases}    (2)

where m_k is the k-th most significant directional response. After computing the LDP code of each pixel (r, c), the histogram H of the image I is computed as:

H(\tau) = \sum_{r=1}^{M} \sum_{c=1}^{N} f(LDP_k(r, c), \tau)    (3)

where \tau is an LDP code value. The number of LDP histogram bins is calculated as follows:

N_{bins} = \frac{8!}{k! \, (8-k)!}    (4)
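A minimal NumPy sketch of Eqs. (1)-(4) follows. It is an unoptimised reference implementation written for this presentation, not the authors' code; the compass labels of the masks follow the common convention for Kirsch kernels.

```python
import numpy as np

# The eight Kirsch masks M0..M7 (each a 45-degree rotation of the previous).
KIRSCH = np.array([
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # M0 (E)
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # M1 (NE)
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # M2 (N)
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # M3 (NW)
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # M4 (W)
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # M5 (SW)
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # M6 (S)
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # M7 (SE)
])

def ldp_code(patch, k=3):
    """LDP code of one pixel from its 3x3 neighbourhood (Eqs. 1-2)."""
    m = np.array([np.sum(mask * patch) for mask in KIRSCH])
    mk = np.sort(np.abs(m))[-k]            # k-th largest |response|
    bits = (np.abs(m) >= mk).astype(int)   # top-k responses -> 1, rest -> 0
    return int(np.sum(bits * 2 ** np.arange(8)))

def ldp_histogram(img, k=3):
    """Histogram of LDP codes over all interior pixels (Eq. 3)."""
    h = np.zeros(256, dtype=int)           # only C(8,k) bins can be non-zero (Eq. 4)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            h[ldp_code(img[r-1:r+2, c-1:c+2], k)] += 1
    return h
```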
Fig. 3: Image conversion to LDP and LPQ
Local Phase Quantization (LPQ) : a texture descriptor called LPQ was proposed in [9]. It is based on the short-term Fourier transform (STFT); the advantage of the STFT is that the phase of the low-frequency coefficients is insensitive to centrally symmetric blur. The spatial blurring is represented by a convolution between the image intensity and a point spread function (PSF). The LPQ descriptor uses the local phase information extracted by the 2-D DFT or, more precisely, an STFT computed over a rectangular M-by-M neighborhood N_x at each pixel position x of the image f(x), defined by this formula:

F(u, x) = \sum_{y \in N_x} f(x - y) \, e^{-j 2\pi u^T y} = w_u^T f_x    (5)

where w_u is the basis vector of the 2-D DFT at frequency u, and f_x is a vector containing all M^2 image samples from N_x.

The local Fourier coefficients are computed at four frequency points u_1 = [a, 0]^T, u_2 = [0, a]^T, u_3 = [a, a]^T and u_4 = [a, -a]^T, where a is a scalar frequency below the first zero crossing of H(u) that satisfies the condition H(u_i) > 0. The vector obtained at each pixel is thus:

F_x = [F(u_1, x), F(u_2, x), F(u_3, x), F(u_4, x)]    (6)

The phase information in the Fourier coefficients is recorded by observing the signs of the real and imaginary parts of each component of F_x. This is done by the simple scalar quantization:

q_j = \begin{cases} 1 & \text{if } g_j \geq 0 \\ 0 & \text{otherwise} \end{cases}    (7)

where g_j is the j-th component of the vector G(x) = [Re{F_x}, Im{F_x}]. The resulting eight binary coefficients q_j form a binary code pattern, which is converted to a decimal number between 0 and 255; the LPQ histogram therefore has 256 bins [4].
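The LPQ code computation of Eqs. (5)-(7) can be sketched as below. This is a plain, unoptimised sketch for illustration: practical implementations use separable convolutions and often add a decorrelation step, both omitted here, and the choice a = 1/M is one common setting rather than a value fixed by the paper.

```python
import numpy as np

def lpq_histogram(img, M=7, a=None):
    """256-bin LPQ histogram of a grayscale image (sketch of Eqs. 5-7)."""
    if a is None:
        a = 1.0 / M                       # a common choice for the scalar frequency
    r = M // 2
    y, x = np.mgrid[-r:r+1, -r:r+1]
    freqs = [(a, 0.0), (0.0, a), (a, a), (a, -a)]        # u1..u4
    filters = [np.exp(-2j * np.pi * (u * x + v * y)) for u, v in freqs]
    H, W = img.shape
    hist = np.zeros(256, dtype=int)
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch = img[i-r:i+r+1, j-r:j+r+1]
            # Local Fourier coefficients at the four frequencies (Eqs. 5-6)
            F = np.array([np.sum(patch * f) for f in filters])
            # Signs of real and imaginary parts give 8 bits (Eq. 7)
            bits = np.concatenate([(F.real >= 0), (F.imag >= 0)]).astype(int)
            hist[int(np.sum(bits * 2 ** np.arange(8)))] += 1
    return hist
```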
2.3 Face representation ML (Multi-Level):
The most common face representation in computer vision is a regular grid of fixed-size regions, called the Multi-Block (MB) representation; the MB representation of level n divides the image into n^2 blocks. A similar representation, called the Multi-Level (ML) representation, has recently been used for age estimation and gender classification [1]. The ML face representation is a spatial pyramid representation constructed by concatenating the sorted series of MB representations: the ML representation of level n is built from the MB representations of levels 1, 2, ..., n. Fig. 4 illustrates the feature extraction procedure using the LPQ descriptor with the ML representation at level 4.
Fig. 4: Example: Multi-Level Local Phase Quantization, level 4
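The MB and ML constructions can be sketched as follows. This is a generic sketch: `descriptor` stands for any block-level histogram function (LDP or LPQ in this paper), and the block boundaries use simple integer division.

```python
import numpy as np

def mb_histograms(img, n, descriptor):
    """MB level n: split the image into an n x n grid, histogram each block."""
    H, W = img.shape
    feats = []
    for i in range(n):
        for j in range(n):
            block = img[i*H//n:(i+1)*H//n, j*W//n:(j+1)*W//n]
            feats.append(descriptor(block))
    return np.concatenate(feats)

def ml_representation(img, level, descriptor):
    """ML level n: concatenation of MB levels 1..n (a spatial pyramid)."""
    return np.concatenate([mb_histograms(img, n, descriptor)
                           for n in range(1, level + 1)])
```

For level 4, the ML representation concatenates 1 + 4 + 9 + 16 = 30 block histograms, which is how the representation multiplies the number of features.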
2.4 Pair features representation and normalization:
After extracting the features, we normalized the feature vector of each member of a pair (child / parent) using the formula given below:

F_{norm} = \frac{F}{\sqrt{\sum_{j=1}^{N} F(j)}}    (8)

Then the two feature vectors (child / parent) are combined into one feature vector using this formula:

F = |F_{child} - F_{parent}|    (9)

where F, F_{child} and F_{parent} are the new feature vector, the feature vector of the child and the feature vector of the parent, respectively.
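Eqs. (8) and (9) amount to the two small functions below. Note that, as written, the square root in Eq. (8) is applied to the plain sum of the entries; for histogram features these entries are non-negative, so the expression is well defined.

```python
import numpy as np

def normalize(f):
    """Eq. 8: divide the feature vector by the square root of its sum."""
    return f / np.sqrt(np.sum(f))

def pair_feature(f_child, f_parent):
    """Eq. 9: absolute difference of the two normalised vectors."""
    return np.abs(normalize(f_child) - normalize(f_parent))
```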
2.5 Features selection:
For the feature selection, we used a linear discriminant approach based on the Fisher score, which quantifies the discriminating power of each feature. This score is given by:

W_i = \frac{N_k (m_k - \bar{m})^2 + N_n (m_n - \bar{m})^2}{N_k \sigma_k^2 + N_n \sigma_n^2}    (10)

where W_i is the weight of feature i, \bar{m} is the feature mean, N_X is the number of samples in kinship class X (k: kin, n: non-kin), and m_X and \sigma_X^2 are the mean and the variance of class X for the considered feature. The features are sorted according to their weight.
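The per-feature Fisher score of Eq. (10) can be computed as below (a sketch; X holds one pair feature vector per row and y the kin / non-kin labels):

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score of each feature column (Eq. 10).

    X: array of shape (samples, features); y: 1 for kin pairs, 0 for non-kin.
    """
    kin, non = X[y == 1], X[y == 0]
    Nk, Nn = len(kin), len(non)
    m = X.mean(axis=0)                     # overall feature means
    mk, mn = kin.mean(axis=0), non.mean(axis=0)
    vk, vn = kin.var(axis=0), non.var(axis=0)
    return (Nk * (mk - m) ** 2 + Nn * (mn - m) ** 2) / (Nk * vk + Nn * vn)

def select_top(X, y, num):
    """Indices of the `num` highest-scoring features, best first."""
    return np.argsort(fisher_scores(X, y))[::-1][:num]
```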
2.6 Kinship verification:
A support vector machine (SVM) constructs a hyperplane, or a set of hyperplanes, in a high-dimensional space, which can be used for classification or regression. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. We used a binary SVM to train and test our proposed approach; the two classes, kinship relationship and no kinship relationship, are represented by 1 and 0 respectively.
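The final verification step can be sketched with scikit-learn's SVC on synthetic data. The toy features below are a stand-in of our own invention (kin pairs cluster near the origin, since their absolute-difference features are small); the kernel and all other settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic pair features standing in for the selected real ones:
# kin pairs (label 1) have small absolute differences, non-kin (label 0) large.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.1, size=(40, 5)),    # kin
               rng.normal(1.0, 0.1, size=(40, 5))])   # non-kin
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel='linear')       # binary SVM: 1 = kin, 0 = non-kin
clf.fit(X, y)
print(clf.predict([[0.2] * 5, [1.0] * 5]))   # prints [1 0]
```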
3 Experiments
To evaluate the performance of the proposed method, we used four publicly available databases: Cornell KinFace, UB KinFace, KinFaceW-I and KinFaceW-II.
3.1 Experimental Settings:
The Cornell KinFace database was created by Fang et al. [5]. It consists of 286 images
and 143 positive pairs. The pairs are distributed as follows: 67 F-S (Father-Son), 32 F-D
(Father-Daughter), 18 M-S (Mother-Son), and 26 M-D (Mother-Daughter).
The UB KinFace database was created by Shao et al. [10], and it has two versions
(Ver1.0 and Ver2.0). The Ver2.0 contains 600 images and 400 positive pairs. Those
pairs are a composition of 180 F-S, 159 F-D, 22 M-S, and 39 M-D.
Lu et al. [8] provided researchers with two databases, KinFaceW-I and KinFaceW-II. KinFaceW-I contains 1066 images and 533 positive pairs with the following distribution: 156 F-S, 134 F-D, 116 M-S, and 127 M-D. KinFaceW-II, on the other hand, has 2000 images and 1000 positive pairs, with a balanced distribution of 250 pairs for each kin relationship.
The negative pairs used in these experiments are selected randomly, taking into consideration the distribution of the relationships. Likewise, the 5 folds are selected randomly while taking the distribution of the relationships into account.
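The negative-pair sampling can be sketched as follows. This is our own illustrative sketch (the text does not specify the exact procedure): each relation type keeps its share of pairs, and each child is matched with a randomly chosen non-parent from the same relation. It assumes at least two positive pairs per relation.

```python
import random

def sample_negative_pairs(positive_pairs, seed=0):
    """Build negative pairs with the same per-relation distribution.

    positive_pairs maps a relation type (e.g. 'F-S') to a list of
    (child_id, parent_id) tuples.
    """
    rng = random.Random(seed)
    negatives = {}
    for rel, pairs in positive_pairs.items():
        parents = [p for _, p in pairs]
        neg = []
        for child, true_parent in pairs:
            p = rng.choice(parents)
            while p == true_parent:        # never re-create a positive pair
                p = rng.choice(parents)
            neg.append((child, p))
        negatives[rel] = neg
    return negatives
```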
3.2 Experimental Results:
The experimental results on the used databases are summarized in Fig. 5a, which shows that the accuracy increases with the number of selected features up to an optimal value, then decreases and finally stabilizes. The explanation of these transitions is as follows: at first the selected features are very few, and each added feature improves the accuracy until the optimal number of features is reached; beyond that point the added features are less relevant and the accuracy decreases. Furthermore, ML-LPQ outperforms ML-LDP across the whole range of selected features on all of the used databases; the gap between the two descriptors is very large on the UB and Cornell databases, at about 20% and 33% respectively.
Fig. 5b shows the training speed for each database using the two feature extraction methods (ML-LPQ and ML-LDP). Two observations can be made from the results: first, the training time increases with the number of selected features; second, the databases that contain more samples take more time in the training phase.
Fig. 5: Accuracy and CPU time results. (a) Overall accuracy (%) as a function of the number of selected features (0-5000) for each database and descriptor (Cornell, UB Kin, KinFace-I and KinFace-II, each with LDP and LPQ). (b) CPU time of the training phase as a function of the number of selected features, for the same database/descriptor combinations.
Table 1: A comparison of the proposed approach with other kinship verification approaches

Year  Approach            Cornell KinFace  UB KinFace  KinFaceW-I  KinFaceW-II
2010  PSM [5]             70.67 %          -           -           -
2011  TL [13]             -                60.00 %     -           -
2011  TSL [10]            -                69.67 %     -           -
2014  PDFL [16]           71.90 %          67.30 %     -           -
2014  DML [15]            73.50 %          74.50 %     -           -
2014  MNRML [8]           -                -           69.90 %     76.50 %
2015  DKV [12]            -                -           66.90 %     69.50 %
2016  ESL [17]            -                -           74.10 %     74.30 %
2017  NRCML [14]          -                -           65.80 %     65.80 %
2018  Proposed ML-LPQ     82.86 %          73.25 %     75.98 %     77.20 %
The comparison of our proposed approach with state-of-the-art methods is summarized in Table 1. From that table we observe that our approach (ML-LPQ) performs better than the state-of-the-art methods on the Cornell, KinFaceW-I and KinFaceW-II databases. On the UB database, our proposed approach has the second-best accuracy, and the difference from the best method is very small.
Fig. 6 shows an example of our method applied to verify the kinship between the persons in the picture.
Fig. 6: Example of kinship verification application
4 Conclusion:
In this paper, we described a novel approach for kinship verification based on the LDP and LPQ descriptors with the ML representation. The experimental results showed that our approach provides better performance than previous approaches. As future work, we propose to use other descriptors with the PML representation. We also envision the use of other pair feature representations, as well as different experimental scenarios such as cross-database experiments.
References
1. S. E. Bekhouche, A. Ouafi, A. Benlamoudi, A. Taleb-Ahmed, and A. Hadid. Facial age
estimation and gender classification using multi level local phase quantization. In 2015 3rd
International Conference on Control, Engineering Information Technology (CEIT), pages
1–4, May 2015.
2. SE. Bekhouche. Facial Soft Biometrics: Extracting demographic traits. PhD thesis, Faculté des sciences et technologies, 2017.
3. SE. Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, and A. Hadid. Pyramid multi-level
features for facial demographic estimation. Expert Systems with Applications, 80:297–310,
2017.
4. F. Bougourzi, SE. Bekhouche, ME. Zighem, A. Benlamoudi, A. Ouafi, and A. Taleb-Ahmed. A comparative study on textures descriptors in facial gender classification. In 10ème Conférence sur le Génie Electrique, Apr 2017.
5. Ruogu Fang, Kevin D Tang, Noah Snavely, and Tsuhan Chen. Towards computational mod-
els of kinship verification. In Image Processing (ICIP), 2010 17th IEEE International Con-
ference on, pages 1577–1580. IEEE, 2010.
6. Taskeed Jabid, Md Hasanul Kabir, and Oksam Chae. Local directional pattern (ldp) for face
recognition. In Consumer Electronics (ICCE), 2010 Digest of Technical Papers International
Conference on, pages 329–330. IEEE, 2010.
7. Vahid Kazemi and Josephine Sullivan. One millisecond face alignment with an ensemble of
regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 1867–1874, 2014.
8. Jiwen Lu, Xiuzhuang Zhou, Yap-Peng Tan, Yuanyuan Shang, and Jie Zhou. Neighborhood repulsed metric learning for kinship verification. IEEE transactions on pattern analysis and machine intelligence, 36(2):331–345, 2014.
9. Ville Ojansivu and Janne Heikkilä. Blur insensitive texture classification using local phase quantization. In International conference on image and signal processing, pages 236–243. Springer, 2008.
10. Ming Shao, Siyu Xia, and Yun Fu. Genealogical face recognition based on ub kinface
database. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2011 IEEE
Computer Society Conference on, pages 60–65. IEEE, 2011.
11. Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple
features. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of
the 2001 IEEE Computer Society Conference on, volume 1, pages I–I. IEEE, 2001.
12. Mengyin Wang, Zechao Li, Xiangbo Shu, Jinhui Tang, et al. Deep kinship verification. In
Multimedia Signal Processing (MMSP), 2015 IEEE 17th International Workshop on, pages
1–6. IEEE, 2015.
13. Siyu Xia, Ming Shao, and Yun Fu. Kinship verification through transfer learning. In IJCAI,
pages 2539–2544, 2011.
14. Haibin Yan. Kinship verification using neighborhood repulsed correlation metric learning.
Image and Vision Computing, 60:91–97, 2017.
15. Haibin Yan, Jiwen Lu, Weihong Deng, and Xiuzhuang Zhou. Discriminative multimetric
learning for kinship verification. IEEE Transactions on Information forensics and security,
9(7):1169–1178, 2014.
16. Haibin Yan, Jiwen Lu, and Xiuzhuang Zhou. Prototype-based discriminative feature learning
for kinship verification. IEEE Transactions on cybernetics, 45(11):2535–2545, 2015.
17. Xiuzhuang Zhou, Yuanyuan Shang, Haibin Yan, and Guodong Guo. Ensemble similarity
learning for kinship verification from facial images in the wild. Information Fusion, 32:40–
48, 2016.
... They achieved 69.75% verification accuracy. Chergui et al. 22 introduced a method in which two images were taken as input and then gave kinship results (kin or non-kin). Their approach was dependent on local phase quantization (LPQ), local directional pattern (LDP) features descriptors, and the multi-level (ML) representation for verifying the kinship. ...
... Several representation methods could be applied to achieve this task, such as sum, multiplication, division, and absolute difference. According to the literature 22,25,64,65 , the absolute difference was found to be the best choice for merging the two feature vectors. This could also be proven in the proposed system, where comparing its results with the other representation methods shows superior results of the absolute difference as shown in Table 6. ...
Preprint
Full-text available
Nowadays, kinship verification is considered an attractive research area with a great interest in computer vision. It significantly affects applications in the real world, such as finding missing individuals, forensics, and genealogical research. However, verifying kinship relations between people using facial images is not straightforward. Many limitations affect kinship verification accuracy. Therefore, this paper proposes a new approach for verifying kinship based on facial image analysis. The proposed approach goes into six stages: preprocessing, feature extraction, feature normalization, feature fusion, feature representation, and kinship verification. The preprocessing stage is responsible for converting RGB images into other color models. Different types of handcrafted feature descriptors (i.e., color and texture descriptors) are extracted in the feature extraction stage. The texture features are represented by scale invariant feature transform (SIFT), local binary pattern (LBP), and heterogeneous auto-similarities of characteristics (HASC), whereas the color features are represented by color correlogram (CC) and dense color histogram (DCH). Then, all the features are set to the same range in the feature normalization stage to be suitable for feature fusion. The feature fusion stage takes place where the different types of features are concatenated together. Next, in the feature representation stage, the parent and child features are gathered into one feature vector. Finally, the kinship verification stage produces the final decision of being kin or non-kin using the gentle AdaBoost ensemble classifier. KinFaceW-I and KinFaceW-II datasets were used to evaluate the proposed approach, where the obtained results were 79.54\% and 90.65\%, respectively. It is noteworthy that the proposed approach outperforms many state-of-the-art approaches that verify kinship, including those dependent on metric learning and deep convolutional neural nets (CNNs).
... Table 10. On KinfaceW-I dataset the minimum gain of 1.42% over best performing MKSM [49], KinfaceW-II the minimum gain of 0.40% over best performing MKSM [49], for cornell dataset the minimum gain of 1.42% over best performing _ [61], ubkinface the minimum gain of 10.05% over best performing SPDTCWT [8], for FIW dataset the minimum gain of 1.32% over best performing SPDTCWT [8]. ...
Article
Full-text available
Kinship verification refers to comparing similarities between two different individuals through their facial images. In this context, feature descriptors play a crucial role, and few feature descriptors are present in literature to extract kin features from facial images. In this paper, we propose a binary cross-coupled discriminant analysis (BC2DA) based feature descriptor which is able to extract effective kin features from input facial image pairs. This method reduces the discrimination between kin pairs at the feature extraction stage itself. BC2DA converts original kin image pairs to encoded image pairs to reduce the discrimination between them. To make better use of tri-subject kin relations, we further propose multi cross-coupled discriminant analysis (MC2DA). This method reduces the discrimination between child and both parents’ images at the feature extraction stage. Extensive experiments were conducted on six kinship datasets such as KinfaceW-I/II, Cornell, FIW, TSKinface UBKinface to show the efficacy of the proposed algorithm.
... Indeed, the basic principle of the representing image locally relies upon various strategies: grid of blocks, different facial parts and set of landmarks [39]. For instance, [12,14,16,75,92,93] adopted the grid of non-overlapping blocks strategy. Alternatively, [30,33,61,69,87,94] suggested another strategy which is different local facial parts, while [91,99] followed the strategy of detection of facial landmarks and interest points. ...
Article
Full-text available
Analysis of facial images decoding familial features has been attracting the attention of researchers to develop a computerized system interested in determining whether a pair of facial images have a biological kin relationship or not. Given that not all regions of an image are useful to determine the kin relation, thus it is possible to obtain irrelevant and inaccurate information of kinship clues, resulting in false matched kinship. Thus, combining all these regions together will likely produces redundant, irrelevant and deceptive information of kinship, along with higher dimensional space. Motivated by the fact that the facial resemblance among the members in a family can be presented separately in different regions of facial images, where each independent region renders different familial features, there is a high probability that selecting and fusing only the most informative local regions and removing the irrelevant can obtain complementary information for further enhanced accuracy. To this end, unlike other methods, the Fusion of the Best Overlapping Blocks with Siamese Convolutional Neural Network (SCNN-FBOB) is an enhanced method for kinship verification in this paper. This method aimed to simultaneously remove the weak local blocks of the image from a set of overlapping local blocks that achieved low accuracy and only retain the local blocks that achieved high accuracy. Extensive experiments conducted on the benchmark KinFaceW-I and KinFaceW-II databases show highly competitive results over many other state-of-the-art methods.
... Experimental findings indicated that their technique was capable of achieving good results compared to the existing techniques for verifying kinship. Chergui et al. (2018) proposed that in order to verify kin relationship from face images, they used LPQ, local directional pattern (LDP), and multi-level (ML) descriptors. Their experimental findings showed that their technique produced a better performance than previous techniques. ...
Article
Full-text available
Background and Objectives Kinship verification and recognition (KVR) is the machine’s ability to identify the genetic and blood relationship and its degree between humans’ facial images. The face is used because it is one of the most significant ways to recognize each other. Automatic KVR is an interesting area for investigation. It greatly affects real-world applications, such as searching for lost family members, forensics, and historical and genealogical studies. This paper presents a comprehensive survey that describes KVR applications and kinship types. It presents a literature review of current studies starting from handcrafted passing through shallow metric learning and ending with deep learning feature-based techniques. Furthermore, kinship mostly used datasets are discussed that in turn open the way for future directions for the research in this field. Also, the KVR limitations are discussed, such as insufficient illumination, noise, occlusion, and age variations problems. Finally, future research directions are presented, such as age and gender variation problems. Methods We applied a literature survey methodology to retrieve data from academic databases. An inclusion and exclusion criteria were set. Three stages were followed to select articles. Finally, the main KVR stages, along with the main methods in each stage, were presented. We believe that surveys can help researchers easily to detect areas that require more development and investigation. Results It was found that handcrafted, metric learning, and deep learning were widely utilized in kinship verification and recognition problem using facial images. Conclusions Despite the scientific efforts that aim to address this hot research topic, many future research areas require investigation, such as age and gender variation. In the end, the presented survey makes it easier for researchers to identify the new areas that require more investigation and research.
... Chergui et al. [5] proposed approach based on ML-LPQ and ML-LDP features. They applied their method on both Cornell KinFace and UB Kin databases. ...
Conference Paper
Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, we propose a novel approach based on Local Ternary Patterns (LTP) and the Multi-Level (ML) representation. Also, we investigate the effect of ML and Multi-Block (MB) representation for facial kinship verification , and the effect of different features representation. Moreover, the use of Fisher Score to reduce the number of features and the support vector machine (SVM) for the kinship classification. Our approach consists of six stages which are : (i) face preprocessing, (ii) features extraction, (iii) face representation (iv) pair features representation and normalization, (v) features selection and (vi) classification. The proposed approach is tested and analyzed on three publicly available databases (Cornell KinFace, UB Kin database, Familly 101, KinFac W-I and W-II). The obtained results are good comparable with the state-of-art approaches.
... Chergui et al. [7] proposed an approach based on the LPQ and LDP feature descriptors and the ML feature representation; they applied their method to both the Cornell KinFace and UB Kin databases. They also proposed another approach in [6], based on the LBP and BSIF feature descriptors and the PML feature representation, which they likewise applied to both the Cornell KinFace and UB Kin databases. ...
Conference Paper
Facial kinship verification is an interesting problem due to its potential applications, such as organizing photo albums, image annotation, and recognizing resemblances among humans. In this paper, we propose a novel approach based on the Weber Local Descriptor (WLD) along with the RGB color space and the Pyramid Multi-Level (PML) representation for facial kinship verification. In particular, we investigate the leverage of Locality Sensitive Discriminant Analysis (LSDA) to reduce the number of features and Linear Discriminant Analysis (LDA) for kinship classification. The approach consists of three main stages: (1) face pre-processing, (2) feature extraction, and (3) kinship classification (kin or non-kin). The proposed approach is tested and analyzed on three publicly available databases (Cornell KinFace, UB Kin database, Family 101). The obtained results are either better than or comparable with state-of-the-art approaches.
Article
The use of facial images for kinship verification is a challenging research problem in soft biometrics and computer vision. In this work, we present a kinship verification system that starts with a pair of facial images of a child and a parent and determines, as its final result, whether the two persons have a kin relation or not. Our approach consists of five steps: (i) a face preprocessing step to obtain aligned and cropped facial images of the pair, (ii) extracting deep features based on the deep learning model called Visual Geometry Group (VGG) Face, (iii) applying our proposed pair feature representation function along with feature normalization, (iv) using the Fisher Score (FS) to select the most discriminative features, and (v) deciding whether there is a kinship or not based on a Support Vector Machine (SVM) classifier. We conducted several experiments to demonstrate the effectiveness of our approach, which we tested on five benchmark databases (Cornell KinFace, UB KinFace, Family101, KinFaceW-I, and KinFaceW-II). Our results indicate that our system is robust compared to other existing approaches.
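A recurring step in these pipelines is the "pair feature representation": turning the two per-face feature vectors into a single vector describing the pair, followed by normalization. The exact pairing function used by the authors is not given in the abstract; a common hypothetical choice concatenates the element-wise absolute difference with the element-wise product:

```python
import numpy as np

def pair_features(f_parent, f_child):
    """Combine two per-face feature vectors into one pair vector.

    Hypothetical pairing function: |p - c| concatenated with p * c.
    The resulting vector has twice the length of one face descriptor.
    """
    f_parent = np.asarray(f_parent, dtype=float)
    f_child = np.asarray(f_child, dtype=float)
    return np.concatenate([np.abs(f_parent - f_child), f_parent * f_child])

def zscore(X):
    """Column-wise z-score normalization of a pair-feature matrix."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

p = np.array([1.0, 2.0, 3.0])
c = np.array([1.0, 0.0, 4.0])
v = pair_features(p, c)  # length 6: [0, 2, 1] then [1, 0, 12]
```

The normalized pair vectors would then be fed to the feature selection and SVM stages.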
Conference Paper
Kinship verification through facial images is an active research topic due to its potential applications. In this paper, we propose an approach which takes two images as input and gives a kinship result (kinship / no-kinship) as output. Our approach is based on a deep learning model (ResNet) for the feature extraction step, together with our proposed pair feature representation function and t-test-based feature ranking (RankFeatures) to reduce the number of features; finally, we use an SVM classifier for the kinship verification decision. The approach consists of three steps: (1) face preprocessing, (2) deep feature extraction and pair feature representation, and (3) classification. Experiments are conducted on five public databases. The experimental results show that our approach is comparable with existing approaches.
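The t-test-based feature ranking mentioned here scores each feature by how well it separates the kin and non-kin classes. As a sketch of the idea (not the exact MATLAB `rankfeatures` call), rank features by the absolute two-sample t-statistic:

```python
import numpy as np

def ttest_rank(X, y):
    """Rank feature columns of X by the absolute two-sample t-statistic
    between class y == 1 (kin) and class y == 0 (non-kin).
    Returns feature indices, most discriminative first."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = len(X0), len(X1)
    t = np.abs(X1.mean(axis=0) - X0.mean(axis=0)) / np.sqrt(
        X1.var(axis=0, ddof=1) / n1 + X0.var(axis=0, ddof=1) / n0 + 1e-12)
    return np.argsort(t)[::-1]

# Toy data: feature 0 differs strongly between classes, feature 1 does not.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, (30, 2)), rng.normal([4, 0], 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
order = ttest_rank(X, y)
```

Keeping only the first k indices of `order` would implement the dimensionality reduction step before the SVM.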
Conference Paper
Kinship verification through facial images is an active research topic due to its potential applications. In this paper, we propose an approach which takes two images as input and gives a kinship result (kinship / no-kinship) as output. The approach consists of five steps: (1) face preprocessing, (2) deep feature extraction, (3) pair feature representation and normalization, (4) feature selection, and (5) kinship verification. Experiments are conducted on five public databases (Cornell KinFace, UB Kin database, Family, KinFace-I, and KinFace-II). The experimental results show that our approach is comparable with existing approaches.
Thesis
Full-text available
Soft biometrics has attracted a lot of attention recently due to its ability to improve biometric systems. It offers many traits that can be used in biometrics, some of which are more popular than others. These are called the demographic traits (i.e., age, gender, and ethnicity), and they belong to the facial soft biometric traits. Recently, several applications that exploit demographic attributes have emerged, including access control, re-identification in surveillance videos, integrity of face images in social media, intelligent advertising, human-computer interaction, and law enforcement. In this dissertation, facial demographic estimation from facial images is studied, starting with the existing techniques, such as deep learning-based, image-based, and anthropometrics-based approaches. The databases used for age estimation, gender classification, and ethnicity classification are also examined, and the different evaluation measures are described, ending with the proposed approach and its results on different databases. The proposed approach consists of three main stages: 1) face alignment and preprocessing; 2) feature extraction and selection; 3) demographic estimation. The purpose of face alignment is to localize faces in images, rectify the 2D or 3D pose of each face, and crop the region of interest. This preprocessing stage is important since the subsequent stages depend on it and it can affect the final performance of the system; it is also challenging, since it must overcome the many variations that may appear in a face image. The feature extraction and selection stage extracts the face features, either with a holistic method or with a local method; the extracted features are then selected using a supervised feature selection method in order to omit possibly irrelevant features.
In the last stage, we propose to feed the obtained features to a hierarchical estimator with three layers, where we first classify the ethnicity and the gender and then estimate the age. Finally, the results obtained on different databases were stable and good compared with state-of-the-art methods. The proposed approach is also suited for real-time applications.
Conference Paper
Full-text available
The aim of this work is to investigate the impact of global and local image descriptors on facial gender classification by carrying out an independent comparative study of several texture descriptor algorithms. In this paper, we consider three global descriptors, namely the Gray-Level Co-occurrence Matrix (GLCM), the Gabor Wavelet Transform (GWT), and the Autocorrelation Function (ACF). On the other hand, we consider four local image descriptors: Local Binary Patterns (LBP), Local Directional Pattern (LDP), Local Phase Quantization (LPQ), and Binarized Statistical Image Features (BSIF). The experimental comparison proves that the local image descriptors are more efficient than the global ones for facial gender classification. All experiments were conducted on the Image of Groups (IoG) database.
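To make the local-descriptor idea concrete, here is a minimal numpy sketch of the simplest of the four, basic 3x3 LBP: each interior pixel is coded by thresholding its 8 neighbours against the centre value, and the descriptor is the histogram of the codes. LDP and LPQ follow the same local-coding-then-histogram pattern but with different codes:

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 Local Binary Patterns over a 2-D grayscale image.
    Returns one 8-bit code per interior pixel."""
    img = np.asarray(image, dtype=float)
    c = img[1:-1, 1:-1]  # interior (centre) pixels
    # Neighbour offsets in a fixed clockwise order; each offset is one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(image):
    """Normalized 256-bin histogram of LBP codes -- the face descriptor."""
    h = np.bincount(lbp_8(image).ravel(), minlength=256).astype(float)
    return h / h.sum()
```

On a perfectly flat patch every neighbour ties with the centre, so every code is 255 and the histogram has all of its mass in that single bin.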
Conference Paper
Full-text available
Facial demographic classification is an attractive topic in computer vision. Attributes such as age and gender can be used in many real-life applications, such as face recognition and internet safety for minors. In this paper, we present a novel approach for age estimation and gender classification under uncontrolled conditions, following the standard protocols for fair comparison. Our proposed approach is based on Multi-Level Local Phase Quantization (ML-LPQ) features, which are extracted from normalized face images. Two different Support Vector Machine (SVM) models are used to predict the age group and the gender of a person. The experimental results on the benchmark Image of Groups dataset showed the superiority of our approach compared to the state of the art.
Conference Paper
Full-text available
This paper addresses the problem of face alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high-quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum-of-square-error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and their importance in combating overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data.
Article
We present a novel learning system for human demographic estimation in which the ethnicity, gender, and age attributes are estimated from facial images. The proposed approach consists of three main stages: 1) face alignment and preprocessing; 2) constructing a Pyramid Multi-Level face representation, from which local features are extracted from the blocks of the whole pyramid; 3) feeding the obtained features to a hierarchical estimator with three layers. Since ethnicity is by far the easiest attribute to estimate, the adopted hierarchy is as follows: the first layer predicts the ethnicity of the input face; based on that prediction, the second layer estimates the gender using the corresponding gender classifier; based on the predicted ethnicity and gender, the age is finally estimated using the corresponding regressor.
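The Pyramid Multi-Level (PML) representation referenced here and in several abstracts above splits the face into successively finer grids and concatenates a per-block descriptor from every block at every level. A minimal sketch, using the block mean intensity as a stand-in for a real texture descriptor such as LPQ:

```python
import numpy as np

def pml_blocks(image, levels=3):
    """Pyramid Multi-Level sketch: at level l the face is split into an
    l x l grid; a per-block descriptor (here just the mean intensity,
    a placeholder for LPQ/LBP histograms) is computed for every block,
    and all blocks across all levels are concatenated into one vector."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    feats = []
    for l in range(1, levels + 1):
        ys = np.linspace(0, h, l + 1).astype(int)  # row boundaries
        xs = np.linspace(0, w, l + 1).astype(int)  # column boundaries
        for i in range(l):
            for j in range(l):
                block = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                feats.append(block.mean())
    return np.array(feats)

v = pml_blocks(np.arange(36.0).reshape(6, 6), levels=3)
# 1 + 4 + 9 = 14 blocks in total for a 3-level pyramid
```

With real histogram descriptors, each block would contribute a whole histogram rather than one scalar, so the final vector grows quickly with the number of levels, which is why a feature selection step usually follows.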
Article
Kinship verification is an interesting and challenging problem in human face analysis which has received increasing interest in computer vision and biometrics in recent years. This paper presents a neighborhood repulsed correlation metric learning (NRCML) method for kinship verification via facial image analysis. Most existing metric-learning-based kinship verification methods are developed with the Euclidean similarity metric, which is not powerful enough to measure the similarity of face samples, especially when they are captured in wild conditions. Motivated by the fact that the correlation similarity metric handles face variations better than the Euclidean similarity metric, we propose an NRCML method using the correlation similarity measure, under which the kin relation of facial images can be better highlighted. Since negative kinship samples are usually fewer than positive samples, we automatically identify the most discriminative negative samples in the training set to learn the distance metric, so that the discriminative information encoded by the negative samples can be better exploited. Experimental results show the efficacy of the proposed approach.
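The correlation similarity that NRCML builds on is, in essence, the cosine of the mean-centred feature vectors. Unlike Euclidean distance, it is invariant to a common offset and positive scaling of the features, which is why it copes better with illumination-style variations. A minimal illustration (the metric learning itself is omitted):

```python
import numpy as np

def correlation_similarity(x, y):
    """Correlation similarity between two feature vectors:
    cosine of the mean-centred vectors, in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

# [15, 25, 35] is just [1, 2, 3] scaled by 10 and shifted by 5:
# Euclidean distance between them is large, yet correlation is exactly 1.
s = correlation_similarity([1, 2, 3], [15, 25, 35])
```

NRCML then learns a linear transform under which this similarity is high for kin pairs and low for the selected hard negative pairs.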
Article
Kin relationships have been well investigated in the psychology community over the past decades, while kin verification using facial images is a relatively new and challenging problem in the biometrics community. Recently, it has attracted substantial attention, mainly motivated by the observation that children generally resemble their parents more than other persons with respect to facial appearance. Unlike most previous supervised metric learning methods, which focus on learning a Mahalanobis distance metric for kin verification, we propose in this paper a new Ensemble Similarity Learning (ESL) method for this challenging problem. We first introduce a sparse bilinear similarity function to model the relative characteristics encoded in kin data. The similarity function, parameterized by a diagonal matrix, enjoys superior computational efficiency, making it more practical for real-world high-dimensional kinship verification applications. Then, ESL learns from the kin dataset by generating an ensemble of similarity models with the aim of achieving strong generalization ability. Specifically, ESL works by best satisfying the constraints (typically triplet-based) derived from the class labels on each base similarity model, while maximizing the diversity among the base similarity models. Experimental results demonstrate that our method is superior to some state-of-the-art methods in terms of both verification rate and computational efficiency.
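The efficiency argument in this abstract comes from the diagonal parameterization: a general bilinear similarity x^T M y costs O(d^2) per pair, whereas with a diagonal M it reduces to a weighted inner product, O(d). A one-line sketch (learning the weights w is the part ESL handles and is omitted here):

```python
import numpy as np

def bilinear_similarity(x, y, w):
    """Bilinear similarity with a diagonal parameter matrix:
    s(x, y) = x^T diag(w) y = sum_j w_j * x_j * y_j.
    Costs O(d) per pair instead of O(d^2) for a full matrix."""
    return float(np.sum(np.asarray(w, dtype=float)
                        * np.asarray(x, dtype=float)
                        * np.asarray(y, dtype=float)))

# The weights w decide which feature dimensions count towards similarity:
# with w = [1, 0] only dimension 0 contributes.
s0 = bilinear_similarity([1, 2], [3, 4], [1, 0])
```

ESL trains several such weight vectors on triplet constraints and combines them as an ensemble to improve generalization.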
Article
In this paper, we propose a new prototype-based discriminative feature learning (PDFL) method for kinship verification. Unlike most previous kinship verification methods, which employ low-level hand-crafted descriptors such as local binary patterns and Gabor features for face representation, this paper aims to learn discriminative mid-level features to better characterize the kin relation of face images. To achieve this, we construct a set of face samples with unlabeled kin relations from the labeled face-in-the-wild dataset as the reference set. Each sample in the training kinship dataset is then represented as a mid-level feature vector, where each entry is the corresponding decision value from one support vector machine hyperplane. Subsequently, we formulate an optimization function that minimizes the distance between intraclass samples (with a kin relation) and maximizes the distance between neighboring interclass samples (without a kin relation) in the mid-level feature space. To make better use of multiple low-level features for mid-level feature learning, we further propose a multiview PDFL method that learns multiple mid-level features to improve the verification performance. Experimental results on four publicly available kinship datasets show the superior performance of the proposed methods over both state-of-the-art kinship verification methods and human ability on our kinship verification task.
Article
In this paper, we propose a new discriminative multimetric learning method for kinship verification via facial image analysis. Given each face image, we first extract multiple features using different face descriptors to characterize the image from different aspects, because different feature descriptors provide complementary information. We then jointly learn multiple distance metrics with these extracted features, under which the probability that a pair of face images with a kin relation has a smaller distance than a pair without one is maximized, and the correlation between different features of the same face sample is simultaneously maximized, so that complementary and discriminative information is exploited for verification. Experimental results on four face kinship datasets show the effectiveness of our proposed method over existing single-metric and multimetric learning methods.