
IJAC: Volume 5, No. 2, July-December 2012, pp. 115-126 International Science Press: ISSN: 0974-6277

Detection and Recognition of Human Faces Based on Hybrid Techniques

Alaa Sulaiman Al-Waisy*

*Department of Computer Science, Alma’ref University College, Iraq,

E-mail: king_alaa87@yahoo.com

Abstract: In this paper, novel face detection and recognition approaches based on learning and transformation techniques are implemented. Detecting faces across multiple views is more challenging than in a frontal view. To address this problem, an efficient approach is presented using a kernel machine to learn nonlinear mappings that provide an effective view-based representation for multi-view face detection. Kernel Principal Component Analysis (KPCA) is used to project data into the view-subspaces, from which view-based features are computed. Multi-view face detection is performed by classifying each input image into the face or non-face class using a two-class Kernel Support Vector Classifier (KSVC). Once an input image is detected as a face, the curvelet transform is applied to the face image as a feature extraction method to reduce the dimensionality, which in turn reduces the required computational power and memory size. The Nearest Mean Classifier (NMC) is then adopted to recognize different faces. Experimental results demonstrate successful face detection and recognition over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.

Keywords: Face Detection, Face Recognition, Kernel Principal Component Analysis, Kernel Support Vector

Machine.

1. INTRODUCTION

Biometric recognition systems based on face recognition have shown excellent performance in areas such as secure access to buildings, airports and seaports, border checkpoints, law enforcement, surveillance systems, and so on. The face recognition problem is very challenging because of variations among different face images of the same person due to changes in facial expression, pose, illumination conditions, rotation, age, and the presence of a beard or moustache [1]. Therefore developing a computational model of face recognition is quite difficult, because faces are very complex. In general, a face recognition system involves three important stages: face detection, feature extraction, and identification and/or verification.

Face detection is the first stage of an automated face recognition system, since a face has to be located in the overall image before it is recognized [2]. As computers become faster and more affordable, many applications that use face detection/localization are becoming an integral part of daily life, for example face identification systems, face tracking, video surveillance and security control systems, and human-computer interfaces. These applications often require a detected and segmented human face that is ready to be processed [3], [4]. However, detecting a face under various environments is still a challenging task. Several factors make face detection difficult. One is the variety of colored lighting sources; another is that facial features such as eyes may be partially or wholly occluded by a shadow generated by a biased lighting direction; others are race and different face poses, with or without glasses. Finally, faces are not rigid and have a high degree of variability in size, shape, color, and texture [5]. Therefore the detection rate and the number of false positives are important factors in evaluating face detection systems [6]. This paper describes progress toward a system which can detect faces reliably and in real time, regardless of pose. The presented system uses a kernel machine learning based approach for extracting nonlinear features of face images and applies them to multi-view face detection. KPCA is applied on a set of view-labeled face images to learn nonlinear view-subspaces. Nonlinear features are the projections of the data onto these nonlinear view-subspaces. Face detection is performed by using KSVC as the classifying function, based on the nonlinear features. One distinctive advantage this type of classifier has over traditional neural networks is that Support Vector Machines (SVMs) achieve better generalization performance. While neural networks such as Multi-Layer Perceptrons (MLPs) can produce a low error rate on training data, there


is no guarantee that this will translate into good performance on test data [3]. The literature on SVMs includes many pattern recognition topics such as face authentication, face recognition, object detection, text classification, image classification, and voice identification [7]. In the next stage of the implemented system, a transformation method based on the curvelet transform is applied. The curvelet transform serves as a feature extraction method that reduces the dimensionality, which in turn reduces the required computational power and memory size. The Nearest Mean Classifier (NMC) is then adopted to recognize different faces. The results show that the implemented approaches yield high detection and recognition rates even under different lighting conditions or when noise is added to the testing images.

The remainder of the paper is organized as follows: Section 2 introduces basic concepts of face detection and recognition methods (KPCA, KSVC and the curvelet transform). Section 3 surveys the literature on face detection and recognition systems. Section 4 describes the implemented face detection and recognition system. Section 5 shows experimental results. Section 6 concludes.

2. FACE DETECTION AND RECOGNITION

To address the face detection and recognition problems, an efficient approach is presented in this paper. For the first problem, kernel methods that generalize linear SVC and PCA to their nonlinear counterparts are used. The trick of kernel methods is to perform dot products in the feature space by using kernel functions in the input space, so that the nonlinear mapping is performed implicitly. For the second problem, a transformation method is implemented as the feature extraction technique. A transformation is a process that maps an object from a given domain to another in which it can be recognized. A large class of image processing transformations is linear in nature: an output image is formed from linear combinations of the pixels of an input image [8].
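As a small numerical illustration of the kernel trick described above (the feature map and data here are illustrative, not taken from the paper): for the degree-2 homogeneous polynomial kernel on R², k(x, y) = (x · y)², the explicit feature map is Φ(x) = (x₁², √2·x₁x₂, x₂²), and evaluating the kernel in input space gives exactly the dot product in the feature space:

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 homogeneous polynomial kernel on R^2.
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, y):
    # The same quantity evaluated directly in input space: k(x, y) = (x . y)^2.
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# The kernel computes the feature-space dot product without forming phi explicitly.
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
print(poly_kernel(x, y))  # 16.0
```

This is why a linear algorithm written purely in terms of dot products can run implicitly in the (possibly much larger) feature space at input-space cost.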

2.1 Kernel Principal Components Analysis (KPCA)

KPCA is the nonlinear version of PCA, constructed by using a specified kernel function. As a brief description of PCA: PCA is used to lower the dimensionality of the feature space in order to reduce the time complexity [9]. The eigenvectors of the covariance matrix of the face images constitute the eigenfaces. The dimensionality of the face feature space is reduced by selecting only the eigenvectors possessing the largest eigenvalues. Once the new face space is constructed, an arriving test image is projected onto this face space to yield its feature vector, the representation coefficients in the constructed face space. The classifier decides the identity of the individual according to a similarity score between the test image's feature vector and the PCA feature vectors of the individuals in the database [10]. Using PCA for the eigenfaces method, feature vectors identifying each image can be obtained as follows:

Given a set of examples in R^N represented by column vectors, subtract their mean vector to obtain the centered examples x_i ∈ R^N (i = 1, …, m). The covariance matrix is

C = (1/m) Σ_{j=1}^{m} x_j x_j^T                  (1)

Linear PCA is an algorithm which diagonalizes the covariance matrix by performing a linear transformation. By using a nonlinear mapping, the data set can be mapped into a higher dimensional feature space H. The representation of features in this high dimensional feature space helps the classifier to perform better. Fortunately, for certain feature spaces H there is a function for computing scalar products in the feature space. This is known as a kernel function. By using a kernel function, every linear algorithm that uses scalar products can be implicitly executed in H without explicitly knowing the mapping Φ, constructing a nonlinear version of the linear algorithm. The nonlinear version of PCA constructed by using a kernel function is known as kernel principal component analysis (KPCA). Let us now generalize classic PCA to kernel PCA. Let Φ : x ∈ R^N → Φ(x) ∈ H be a mapping from the input space to a high dimensional feature space [11]. The covariance matrix in H is

C̄ = (1/m) Σ_{j=1}^{m} Φ(x_j) Φ(x_j)^T           (2)


To perform PCA in H, we have to find the eigenvalues λ ≥ 0 and eigenvectors v satisfying

λv = C̄v                                          (3)

All solutions v with λ ≠ 0 must lie in the span of Φ(x_1), Φ(x_2), …, Φ(x_m). Hence Eq. (3) is equivalent to

λ(Φ(x_k) · v) = (Φ(x_k) · C̄v),   for k = 1, 2, …, m          (4)

Because all v for nonzero λ must lie in the span of the Φ(x_k)'s, there exist coefficients α_i such that

v = Σ_{i=1}^{m} α_i Φ(x_i)                        (5)

Defining the m × m matrix K = [K_{ij}] with K_{ij} = (Φ(x_i) · Φ(x_j)), the eigenvalue problem can be converted into the following:

mλα = Kα                                          (6)

for nonzero eigenvalues, where α = (α_1, …, α_m)^T. Sort the λ_i in descending order and use the first M ≤ m principal components v_i as the basis vectors in H (in fact, there are usually some zero eigenvalues, in which case M < m). The M vectors span a linear subspace of H, called the KPCA subspace. The projection of a point x onto the k-th kernel principal component v_k is calculated as:

(v_k · Φ(x)) = Σ_{i=1}^{m} α_{k,i} (Φ(x_i) · Φ(x)) = Σ_{i=1}^{m} α_{k,i} K(x_i, x)          (7)
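The derivation in Eqs. (1)–(7) can be sketched directly in NumPy. The following is a minimal illustration on random data, not the paper's implementation; it also centers the kernel matrix in feature space, a step the derivation above leaves implicit:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # m = 20 examples in R^5 (random toy data)
m = X.shape[0]

def poly_kernel(A, B, a=1.0, b=0.0, n=2):
    # Polynomial kernel K(x_i, x_j) = (a (x_i . x_j) + b)^n.
    return (a * (A @ B.T) + b) ** n

K = poly_kernel(X, X)

# Centre the kernel matrix in feature space (equivalent to subtracting the
# mean of the mapped points phi(x_j) before forming the covariance matrix).
one = np.full((m, m), 1.0 / m)
Kc = K - one @ K - K @ one + one @ K @ one

# Solve m*lambda*alpha = K*alpha (Eq. 6); eigh returns eigenvalues ascending,
# so reverse to get them in descending order.
eigvals, alphas = np.linalg.eigh(Kc)
eigvals, alphas = eigvals[::-1], alphas[:, ::-1]

M = 3                                  # keep the M largest components
# Normalise each alpha so that the corresponding direction v_k has unit norm.
alphas = alphas[:, :M] / np.sqrt(eigvals[:M])

# Projections of the training points onto the KPCA subspace (Eq. 7).
projections = Kc @ alphas
print(projections.shape)               # (20, 3)
```

With a linear kernel this reduces exactly to ordinary PCA, which is a useful sanity check when experimenting with kernels.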

2.2 Kernel Support Vector Machines (KSVMs)

Support vector machines (SVMs) are a popular method for binary classification. SVMs can be seen as an extension of the perceptron, which tries to find a hyperplane that separates the data [12]. A more detailed discussion of the theory and applications of SVMs can be found in [7]. Consider the problem of separating a set of training vectors belonging to two classes, given training data (x_1, y_1), …, (x_m, y_m), where x_i ∈ R^N is a feature vector and y_i ∈ {−1, +1} its class label. If the two classes are linearly separable, there exists a separating hyperplane (w, b), and the decision function is given by:

f(x) = sign( Σ_{i=1}^{m} α_i y_i (x_i · x) + b )          (8)

In real-life problems it is rarely the case that positive and negative samples are linearly separable. Non-linear support vector classifiers map the input space R^N to a high dimensional feature space H by x → Φ(x) ∈ H, such that the mapped data is linearly separable in the feature space [13]. In short, a soft margin SVM solves the quadratic program (QP1) given as follows. Find the Lagrange multipliers {α_i}_{i=1}^{m} that maximize the objective function:

Q(α) = Σ_{i=1}^{m} α_i − (1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} α_i α_j y_i y_j k(x_i, x_j)          (9)

subject to

Σ_{i=1}^{m} α_i y_i = 0,   0 ≤ α_i ≤ C            (10)

where C is a user-specified positive parameter. The data points with α_i > 0 are called support vectors. Having the Lagrange multipliers, the optimum weight vector w can be computed by:


w = Σ_{i=1}^{m} α_i y_i Φ(x_i)                    (11)

By taking the samples with 0 < α_i < C, the bias can be calculated by

b = (1 / N_SV) Σ_{x_i ∈ SV} ( y_i − Σ_{x_j ∈ SV} α_j y_j K(x_j, x_i) )          (12)

where N_SV is the number of support vectors with 0 < α_i < C. Next, a separating hyperplane is computed in H. The decision function becomes

f(x) = sign( Σ_{i=1}^{N_s} α_i y_i K(x_i, x) + b )          (13)

where N_s is the number of support vectors and K(·, ·) is a kernel function. Several kernels are possible, including radial basis functions and polynomial and sigmoid kernels. The choice of the kernel and kernel parameters (e.g. the degree of the polynomial kernel) has to be made by the user, and the optimal choices are problem dependent [14], [15].
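Given a trained set of multipliers, Eq. (13) is straightforward to evaluate. The following sketch uses an RBF kernel and hand-picked toy support vectors, multipliers and bias (illustrative values, not learned from data):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # RBF kernel K(x, y) = exp(-sigma * ||x - y||^2).
    return np.exp(-sigma * np.sum((x - y) ** 2))

def decision(x, support_vectors, alphas, labels, b, sigma=1.0):
    # Eq. (13): f(x) = sign( sum_i alpha_i * y_i * K(x_i, x) + b ).
    s = sum(a * y * rbf_kernel(sv, x, sigma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return int(np.sign(s + b))

# Toy support set: one positive prototype near the origin, one negative at (2, 2).
support_vectors = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
alphas = [1.0, 1.0]
labels = [+1, -1]
b = 0.0

print(decision(np.array([0.1, 0.1]), support_vectors, alphas, labels, b))   # prints 1
print(decision(np.array([1.9, 1.9]), support_vectors, alphas, labels, b))   # prints -1
```

Points near the positive support vector receive a positive score and points near the negative one a negative score, as the kernel weighting in Eq. (13) dictates.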

2.3 Curvelet Transform

To overcome the innate limitations of traditional multi-scale representations such as wavelets, a novel transform known as the curvelet transform was developed by Candès and Donoho in 1999. The curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales [16]. The motivation for developing the new transform was to find a way to represent edges and other singularities along curves more efficiently than existing methods, that is, with fewer coefficients required to reconstruct an edge to a given degree of accuracy [17].

The curvelet transform, like the wavelet transform, is a multiscale transform with frame elements indexed by scale and location parameters. Unlike the wavelet transform, it has directional parameters, and the curvelet pyramid contains elements with a very high degree of directional specificity. Also, the curvelet transform is based on a certain anisotropic scaling principle which is quite different from the isotropic scaling of wavelets. Under the wavelet's isotropic principle, the length and width of the support frame are of equal size, whereas in the curvelet transform the width and length are related by width ≈ length², known as parabolic or anisotropic scaling [16]. There are two generations of the curvelet transform. The idea of the First Generation Discrete Curvelet Transform (DCTG1) is first to decompose the image into a set of wavelet bands, and to analyse each band by a local ridgelet transform. It results in a large amount of redundancy. Moreover, this process is very time consuming, which makes it less feasible for facial feature analysis in a large database [18].

To overcome the drawbacks of DCTG1, such as the parabolic scaling ratio width ≈ length² not holding exactly and the high computation time, the Second Generation Curvelet Transform (DCTG2) was introduced in 2006; it is not only simpler, but also faster and less redundant than its first generation version [19]. Currently two implementations of the fast DCTG2 are available, i.e. the Unequally-Spaced Fast Fourier Transform (USFFT) based curvelet and the frequency wrapping based curvelet. The difference is the choice of spatial grid used to translate curvelets at each scale and angle [16].

3. LITERATURE SURVEY

Automatic face detection and recognition problems have attracted many researchers and scientists, and as a consequence, several techniques have been developed to solve these problems. Amongst all these numerous techniques, very few are capable of solving these problems in an unconstrained environment. Generally, several researchers in the field of face detection and recognition have developed different detection and/or recognition algorithms. Some of these works are summarized below:


Min-Quan Jing and Ling-Hwei Chen proposed a method to detect a face with different poses under various environments. On the basis of skin color information, skin regions are first extracted from an input image. Next, the shoulder part is cut out by using shape information, and the head part is then identified as a face candidate. For a face candidate, a set of geometric features is applied to determine if it is a profile face. If not, then a set of eye-like rectangles extracted from the face candidate and the lighting distribution are used to determine if the face candidate is a non-profile face [20]. Yongqiu Tu, Faling Yi, et al. designed a face detector using a multi-classifier combination method. The proposed detector is composed of three classifiers: a skin color detector, an AdaBoost detector based on Haar-like features, and an eye-mouth detector. A semi-serial architecture is designed to combine the three detectors, which sets up a division-and-cooperation system and draws on the merits of each to implement quick and efficient facial detection [21]. Payman Moallem and Bibi Somayeh proposed a fuzzy rule based system for pose, size and position independent face detection in color images. A subtractive clustering method is also applied to decide on the number of membership functions. In the proposed system, skin color, lip position, face shape information and ear texture properties are the key parameters fed to the fuzzy rule based classifier to extract face candidates in an image. Furthermore, the threshold applied to the face candidates is optimized by a genetic algorithm [22].

Niu Liping, Li XinYuan, et al. presented a hybrid approach for face recognition based on Bayesian classification and the wavelet transform. First, the system uses PCA to select the first 10 candidate images. Then these candidate images and each testing image are decomposed into low frequency and high frequency sub-band images by applying the wavelet transform. Finally, Bayesian recognition is processed in parallel using these sub-band images [23].

Mohammed Rziza, Mohamed El Aroussi, et al. proposed an efficient local appearance feature extraction method based on the Curvelet Transform (CT), in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition [24].

Dinesh Kumar, Shakti Kumar, et al. presented a PCA-Memetic Algorithm (PCA-MA) approach for feature selection. PCA was extended by memetic algorithms (MAs), where the former was used for feature extraction/dimensionality reduction and the latter was exploited for feature selection. Simulations were performed over the ORL and YaleB face databases using the Euclidean norm as the classifier. The same approach has also been applied to LDA and Kernel PCA with the MA [25].

4. IMPLEMENTED SYSTEM DESIGN

The implemented face recognition system can be summarized in five steps, as shown in Figure (1).

Figure 1: The Block Diagram of the Implemented Face Detection System


4.1 Image Capture Step

Face recognition has attracted researchers' attention for a long time and has already achieved a high level of success. Most of this research assumes that a clear facial image of sufficient size is available. Unfortunately, facial image quality cannot be guaranteed in long-distance person identification. Therefore, the performance of automatic face recognition is generally determined by the quality of the photographic images used. In the constructed face database a white background is used, providing sufficient distinction between the face/hair area and the background. Only one person is present in each photograph, and no other person or object appears in the background of the face image.

4.2 Preprocessing Step

Any face recognition algorithm relies on the preprocessing operations implemented before the actual recognition algorithm is applied. Here a simple image preprocessing chain is proposed that appears to work well for a wide range of biometric recognition tasks: it reduces the computational time and eliminates many of the effects of changing illumination and noise, while still preserving most of the appearance details needed for recognition.

4.2.1 Image Size Normalization

Size normalization is an important pre-processing technique in face detection and recognition, for which various effective learning-based methods have been proposed. It is usually done to change the acquired image size to a default image size. In this paper the default image size is 256 × 256, on which the proposed face detection system operates.

4.2.2 Median Filtering

The median filter is normally used to reduce noise in an image, especially one obtained from a camera, somewhat like the mean filter. However, it often does a better job than the mean filter of preserving useful detail in the image. It belongs to the class of edge-preserving smoothing filters, which are non-linear filters. This means that for two images A(x) and B(x):

median[A(x) + B(x)] ≠ median[A(x)] + median[B(x)]

These filters smooth the data while keeping small, sharp details. The median is just the middle value of all the values of the pixels in the neighborhood. Note that this is not the same as the average (or mean); instead, the median has half the values in the neighborhood larger and half smaller. The median is a stronger "central indicator" than the average. In particular, the median is hardly affected by a small number of discrepant values among the pixels in the neighborhood. Consequently, median filtering is very effective at removing various kinds of noise. Figure (2) illustrates an example of median filtering.
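A minimal 3 × 3 median filter can be sketched in a few lines of NumPy (a toy implementation for illustration; production code would use an optimized library routine):

```python
import numpy as np

def median_filter3x3(img):
    # 3x3 median filter with edge replication: each output pixel is the
    # median of its neighbourhood, so isolated outliers are discarded.
    padded = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# A flat gray patch corrupted by one "salt" pixel (illustrative data):
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0            # salt-and-pepper style outlier

filtered = median_filter3x3(img)
print(filtered[2, 2])        # 100.0 -- the outlier is removed
```

Note that a mean filter on the same patch would smear the outlier into its neighbours instead of removing it, which is exactly the behaviour the text contrasts against.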

Figure 2: Median Filtering Operation


4.3 Face Detection Step

In this stage a kernel machine based approach has been presented for learning view-based representations

for multi-view face detection.

4.3.1 Kernel Principle Component Analysis (KPCA)

This algorithm is built on the capability of KPCA. KPCA feature extraction effectively acts as a nonlinear mapping from the input space to an implicit high dimensional feature space. It is hoped that the mapped data in the implicit feature space has a simple distribution, so that a simple classifier (which need not be a linear one) in the high dimensional space can work well. The steps to compute the principal components can be summarized as:

• Compute the matrix K, see Eq. (2). In this paper the polynomial kernel is used as the kernel function:

K(x_i, x_j) = (a (x_i · x_j) + b)^n               (15)

where a = 0.001, b = −1, n = 3, and the x_k ∈ R^N are taken from the face images by rearranging the pixel value order as shown in Figure (3).

Figure 3: Pre-processing Step

• To acquire the eigenfaces, the face image data are converted from matrices to vectors, where the vector version of each face is a column in a matrix. For example, a training set of 250 images, each of 256 × 256 pixels, is converted into a matrix that is 65536 × 250. KPCA is applied to this matrix of face vectors to compute the eigenfaces.

• The resulting eigenfaces are then point-multiplied with the training set images to filter out outlier data and focus the training on the principal features of the face. The resulting images have their intensities scaled.

• The first M = 50 most significant principal components are used as the basis vectors, which are used to train the KSVC to differentiate between face and non-face patterns for face detection.

• Compute the projections of a test point onto the eigenvectors.
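The vectorization in the first step above can be sketched as follows, with random data standing in for the pixel values; the shapes match the 256 × 256, 250-image example in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random data stands in for a training set of face images (illustrative only):
n_images, h, w = 250, 256, 256
images = rng.random(size=(n_images, h, w))

# Each image is flattened so that the vector version of each face is one
# column of the data matrix, as described in the steps above.
data_matrix = images.reshape(n_images, h * w).T
print(data_matrix.shape)      # (65536, 250)

# Subtracting the mean face centres the data before computing principal components.
mean_face = data_matrix.mean(axis=1, keepdims=True)
centered = data_matrix - mean_face
```

The eigenface computation then operates on this 65536 × 250 matrix rather than on the individual 2-D images.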

4.3.2 Kernel Support Vector Machine (KSVM)

SVMs implement complex decision rules by using a non-linear function to map training points to a high dimensional feature space where the labelled points are separable. A separating hyperplane is found which maximizes the distance between itself and the nearest training points; this distance is called the margin. The hyperplane is, in fact, represented as a linear combination of the training points. The steps to find an optimal separating hyperplane (decision function) can be summarized as:

• After further refining the images using a principal components approach, the resulting processed images are converted into input vectors x_i and class values y_i, where i ∈ {1, ..., p, p + 1, ..., q}, with p and q being the number of user and imposter images, respectively, and x has M dimensions, the number of pixels in the image. For the user images, y_j = +1 where


j ∈ {1, ..., p}, and for the imposter images, y_k = −1 where k ∈ {p + 1, ..., q}.

• The data are passed to the SVM training along with the kernel type and kernel parameters. In this paper a Radial Basis Function (RBF) kernel is used, which is calculated as follows:

K(x_i, x_j) = exp(−σ ‖x_i − x_j‖²)                (16)

where σ (Sigma) is the kernel width, specified by the user.

• The decision function will then be tuned to find the optimal SVM parameters for the data. After the training function is complete, a training model is returned which is used for classification.

• In the testing phase, the acquisition and processing steps for the tuning images are the same as those required for the training data, except that the images in this set are used for classification, not training.

• By feeding the tuning data sets into the SVM classification function, the effectiveness of the system and its kernels can be tested. The effectiveness is gauged by the overall detection rate and the number of false positives. Based on the rate for all the test sets, the kernel and kernel parameters are adjusted until the kernel and parameter combination with the highest accuracy is found.
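The label construction and the RBF kernel of Eq. (16) used in these steps can be sketched as follows (toy sizes p and q, and random data in place of the processed face vectors):

```python
import numpy as np

rng = np.random.default_rng(2)

p, q = 4, 7                   # p user images, q - p imposter images (toy sizes)
Xtrain = rng.normal(size=(q, 10))

# Class values as described above: +1 for user images, -1 for imposters.
y = np.concatenate([np.ones(p), -np.ones(q - p)])

def rbf_gram(X, sigma):
    # Pairwise RBF kernel matrix K[i, j] = exp(-sigma * ||x_i - x_j||^2), Eq. (16).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-sigma * d2)

K = rbf_gram(Xtrain, sigma=0.5)
print(K.shape)                # (7, 7)
```

The Gram matrix K and labels y are exactly what the dual problem of Eqs. (9)–(10) consumes; the tuning loop described above would recompute K for each candidate sigma and keep the value with the best detection rate.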

4.4 Feature Extraction Step

Feature extraction is an inevitable step in the classification of high-dimensional data. In this stage, the feature extractor attempts to reduce the data dimensionality by extracting the discriminative facial features while discarding features considered redundant for classification purposes. In the implemented system, the curvelet via wrapping is applied to the face image detected in the face detection step. The curvelet transform based on wrapping of Fourier samples takes a 2-D image as input in the form of a Cartesian array f[n1, n2], such that 0 ≤ n1, n2 < N, and generates a number of curvelet coefficients indexed by a scale j, an orientation l and two spatial location parameters (k1, k2) as output. The discrete curvelet transform can be implemented based on the wrapping algorithm. In this algorithm, four steps are carried out:

• The 2-D image is first transformed into the frequency domain by a forward FFT to obtain the Fourier samples f̂[n1, n2].

• For each scale j and angle l, divide the FFT into a collection of Digital Corona Tiles (wedges) by using two windowing functions, a 'radial window' and an 'angular window'.

• Apply the wrapping algorithm to the wedge data.

• Apply the inverse 2D FFT to the wrapped data to get the curvelet coefficients, Figure (4).
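A faithful wrapping-based transform is provided by the CurveLab package. Purely as a structural sketch of the FFT → frequency-window → inverse-FFT pipeline above, the following NumPy toy uses a single radial band in place of the true radial-times-angular curvelet wedges (the window is an illustrative placeholder, and no wrapping onto a smaller grid is performed):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))

# Step 1: forward 2D FFT of the image, shifted so frequency 0 is centred.
F = np.fft.fftshift(np.fft.fft2(img))

# Step 2: a frequency-domain window.  A real curvelet tile combines a radial
# and an angular window; here one radial band stands in for a single wedge.
n = img.shape[0]
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
radius = np.sqrt(xx ** 2 + yy ** 2)
band = ((radius >= 8) & (radius < 16)).astype(float)

# Steps 3-4: window the spectrum and return to the spatial domain
# (true curvelets additionally wrap each wedge onto a small grid).
coeffs = np.fft.ifft2(np.fft.ifftshift(F * band))
print(coeffs.shape)           # (64, 64)
```

The real transform produces one such coefficient array per scale/angle pair, each much smaller than the image thanks to the wrapping step.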

Figure 4: Curvelet Coefficients


4.5 Classification Step

The classification step is implemented based on the Nearest Mean Classifier (NMC). In the NMC, the distance from each class mean is computed to decide the class of the test data. In mathematical terms, the similarity between the test sample A and any one of the training samples B^q is

D(q) = Σ_{m,n} (A_{mn} − Ā)(B^q_{mn} − B̄^q) / √( Σ_{m,n} (A_{mn} − Ā)² · Σ_{m,n} (B^q_{mn} − B̄^q)² )          (17)

where m and n index the dimensions of the sample, and Ā and B̄^q are the mean values of the testing and training samples, respectively. After computing this score for each class, if the testing image is the same as the training image then D(q) is equal to one; otherwise a smaller value is returned.
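The matching rule of Eq. (17) and the nearest-mean decision can be sketched as follows (toy 4 × 4 "images" stand in for the face templates; all values are illustrative):

```python
import numpy as np

def ncc(A, B):
    # Normalised correlation between test sample A and training sample B, Eq. (17);
    # identical images give 1.0, otherwise a smaller value.
    a = A - A.mean()
    b = B - B.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def classify(test, class_means):
    # Nearest-mean decision: pick the class whose mean image matches best.
    scores = [ncc(test, m) for m in class_means]
    return int(np.argmax(scores)), scores

# Toy class means (random 4x4 "images"):
rng = np.random.default_rng(4)
means = [rng.normal(size=(4, 4)) for _ in range(3)]

label, scores = classify(means[1].copy(), means)
print(label)                  # prints 1: the exact match scores 1.0
```

An exact match with a class mean yields a score of one, matching the behaviour of D(q) described above.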

5. EXPERIMENTAL RESULTS

The experiments were done by constructing a multi-view face database. This database currently contains colored face images of 50 persons. Each person is photographed against a uniform white background using a single camera and identical settings. For each person, we take 10 photographs. Each photograph has a different combination of viewpoint (frontal 0°, right 45°, right 90°, left 45° and left 90°) and facial expression (smiling, laughing, neutral and closed eyes). Figure (5) shows the image variations for three persons.

Figure 5: Image Variation of Some Persons in the Constructed Multi-View Face Database

In this paper, we evaluate the performance of the implemented system by computing the face detection rate and the number of false negatives in five face detection cases, with the number of training and testing images per person taken to be 5. In the first case the face images are tested as they are captured. In the second and third cases the illumination conditions of the training and testing images are changed, respectively, as shown in Figure (6). In the fourth and fifth cases some ratio of noise is added to the training and testing images, respectively (salt-and-pepper noise is used in these two cases). The face detection rate and the number of false negatives of the proposed system in these five cases are summarized in Table 1. The training time and testing time are summarized in Table 2. Every algorithm used 255 images in the training phase (face and non-face), and the testing time is also calculated per image.


Another experiment was done in which the implemented face detection system was tested together with the implemented curvelet technique as a complete face recognition system. In this experiment, the image is first input to the face detection system. If the image is detected as a face, it is input to the curvelet technique as the feature extraction algorithm. Then the NMC is adopted to recognize the different faces. The testing was done for the five cases explained in the first experiment, based on the same training. The experimental results for all five cases are given in Table (3).

Figure 6: Illumination Change (a) Original Image and (b) After Illumination Change

Table 1
The Face Detection Rate and the Number of False Negatives

Case          Number of Images    False Negatives    Detection Rate (%)
First Case    255                 14                 94.4
Second Case   255                 6                  97.6
Third Case    255                 15                 94.0
Fourth Case   255                 10                 96.0
Fifth Case    255                 17                 93.2

Table 2
The Training Time and Testing Time

Method        Training Time (s)    Testing Time (s)
KPCA          184.522940           0.101256
KSVM          1.987141             0.00042
KPCA-KSVM     186.10081            0.101676

Table 3
The Face Recognition Rate

Case          Recognition Rate (%)
First Case    89.4
Second Case   90
Third Case    89
Fourth Case   87
Fifth Case    86


6. CONCLUSION

In this paper, novel face detection and recognition approaches based on learning and transformation techniques are implemented. A kernel machine approach has been presented for learning view-based representations for multi-view face detection. The main part of this stage is the use of KPCA to extract nonlinear features for each view by learning the nonlinear view-subspace. This constructs a mapping from the input image space, in which the distribution of data points is highly nonlinear and complex, to a lower dimensional space in which the distribution becomes simpler, tighter and therefore more predictable, enabling better modeling of faces. The kernel learning approach leads to an architecture composed of an array of KPCA feature extractors, one for each view. Multi-view face detection is performed by classifying each input image into the face or non-face class using a two-class Kernel Support Vector Classifier (KSVC). Once an input image is detected as a face, the curvelet transform is applied to the face image as a feature extraction method to reduce the dimensionality, which in turn reduces the required computational power and memory size. The Nearest Mean Classifier (NMC) is then adopted to recognize different faces. The experiments were done on a constructed multi-view face database, and the results demonstrate successful face detection and recognition over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.

REFERENCES

[1] Mohammad Shahin Mahanta, "Linear Feature Extraction with Emphasis on Face Recognition", Graduate Department of Electrical and Computer Engineering, University of Toronto, 2009.

[2] James Wayman, Anil Jain, Davide Maltoni and Dario Maio, "Biometric Systems", Springer-Verlag London Limited, 2005.

[3] Christopher A. Waring and Xiuwen Liu, "Face Detection Using Spectral Histograms and SVMs", Department of Computer Science, The Florida State University, Tallahassee, FL 32306.

[4] Rudy Adipranata, Eddy, Cherry G. Ballangan, and Ronald P. Ongkodjojo, "Fast Method for Multiple Human Face Segmentation in Color Image", International Journal of Advanced Science and Technology, 3, February 2009.

[5] Min-Quan Jing and Ling-Hwei Chen, "Novel Face-detection Method Under Various Environments", Optical Engineering, 48(6), 067202, June 2009.

[6] Lamiaa Mostafa and Sherif Abdelazeem, "Face Detection Based on Skin Color Using Neural Networks", GVIP 05 Conference, CICC, Cairo, Egypt, 19-21 December 2005.

[7] Gregory Matthew Wagner, "Face Authentication with Pose Adjustment Using Support Vector Machines with a Hausdorff-based Image Kernel", Texas Tech University, December 2007.

[8] Gerhard X. Ritter and Joseph N. Wilson, "Handbook of Computer Vision Algorithms in Image Algebra", ISBN 0849326362, May 1996.

[9] Ivanna K. Timotius, Iwan Setyawan, and Andreas A. Febrianto, "Face Recognition between Two Person using Kernel Principal Component Analysis and Support Vector Machines", International Journal on Electrical Engineering and Informatics, 2(1), 2010.

[10] Hazim Kemal Ekenel and Bülent Sankur, "Multiresolution Face Recognition", www.elsevier.com/locate/asoc, Elsevier B.V., 2004.

[11] Bernhard Schölkopf, Alexander Smola, Klaus-Robert Müller, "Kernel Principal Component Analysis", Max-Planck-Institut für biologische Kybernetik, Spemannstr. 38, 72076 Tübingen, Germany; GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany.

[12] Fabio Aiolli and Alessandro Sperduti, "Multiclass Classification with Multi-Prototype Support Vector Machines", Journal of Machine Learning Research, 6, 2005.

[13] Ignas Kukenys and Brendan McCane, "Support Vector Machines for Human Face Detection", NZCSRSC '08, Christchurch, New Zealand, 2008.

[14] Rik Fransens, Jan De Prins and Luc Van Gool, "SVM-based Nonparametric Discriminant Analysis, An Application to Face Detection", Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), Vol. 2, © 2003 IEEE.

[15] Jochen Maydt and Rainer Lienhart, "Face Detection with Support Vector Machines and a Very Large Set of Linear Features", ACM Multimedia '02, December 1-6, 2002.


[16] Shreeja R. and Shalini Bhatia, "Facial Feature Extraction Using Statistical Quantities of Curve Coefficients", International Journal of Engineering Science and Technology, 2(10), 2010.

[17] Rowan Seymour, Darryl Stewart and Ji Ming, "Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos", EURASIP Journal on Image and Video Processing, Volume 2008.

[18] Ishrat Jahan Sumana, "Image Retrieval Using Discrete Curvelet Transform", Monash University, Australia, November 2008.

[19] Jianhong Xie, "Face Recognition Based on Curvelet Transform and LS-SVM", Proceedings of the 2009 International Symposium on Information Processing (ISIP '09), Huangshan, P.R. China, August 21-23, 2009, pp. 140-143.

[20] Min-Quan Jing and Ling-Hwei Chen, "Novel Face-detection Method under Various Environments", National Chiao Tung University, Department of Computer Science, © 2009 IEEE.

[21] Yongqiu Tu, Faling Yi, Guohua Chen, Shizhong Jiang and Zhanpeng Huang, "Fast Rotation Invariant Face Detection in Color Image Using Multi-Classifier Combination Method", © 2010 IEEE.

[22] Payman Moallem, Bibi Somayeh Mousavi, S. Amirhassan Monadjemi, "A Novel Fuzzy Rule Base System for Pose Independent Faces Detection", www.elsevier.com/locate/asoc, Elsevier B.V., 2010.

[23] Niu Liping, Li Xin Yuan and Dou Yuqiang, "Bayesian Face Recognition Using Wavelet Transform", © 2009 IEEE.

[24] Mohammed Rziza, Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali and Driss Aboutajdine, "Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition", International Journal of Computer Science, 4(1), 2009.

[25] Dinesh Kumar, Shakti Kumar and C.S. Rai, "Feature Selection for Face Recognition: A Memetic Algorithmic Approach", Journal of Zhejiang University Science, June 2009.