Parallel Processing for Multi Face Detection and
Recognition
Varun Pande, Khaled Elleithy, Laiali Almazaydeh
Department of Computer Science and Engineering
University of Bridgeport
Bridgeport, CT 06604, USA
{vpande, elleithy, lalmazay}@bridgeport.edu
Abstract: In this paper, a robust approach for real-time face recognition, where the images come from live video, is proposed. To improve the algorithmic efficiency of face detection, we combine the eigenface method with Haar-like features, to detect both the eyes and the face, and with the Roberts cross edge detector, to locate the position of the human face. The detection stage uses the integral image representation and simple rectangular features to eliminate the need for the expensive computation of a multi-scale image pyramid. Moreover, in order to provide a fast response in our system, we use Principal Component Analysis (PCA) to reduce the dimensionality of the training set, leaving only those features that are critical for face recognition.
The eigen-distance is used in face recognition to match a new face once it is projected onto the face space. A match is declared when the variation difference between the new image and a stored image is below the threshold value.
The experimental results demonstrate that the proposed scheme significantly improves recognition performance; overall, we find that the system outperforms other techniques. Moreover, the proposed system can be used in different vision-based human-computer interaction applications such as ATMs, cell phones, intelligent buildings, etc.
Keywords: Face Detection, Eye Detection, Face Recognition, Haar-like Features, PCA, Eigenfaces, Roberts Cross Edge Detector, Human Computer Interaction, Real-time System.
I. INTRODUCTION
As the famous proverb says, “The face is the index of the mind.” Face-to-face interaction between human beings is considered the most important and natural way to communicate. Recognizing faces and processing the associated data is a challenging task when a very large database is involved and only low-cost desktop or embedded computing is available.
The face recognition systems available so far maintain a database of pre-processed human faces together with the unique features determined from each face. These features are stored with the respective individual’s face. Therefore, when a query is raised, the unique features are extracted from the query face and compared with the features in the database to find a match [1].
There are many areas of application that use face recognition technology, ranging from security applications (recognizing criminals in public spaces such as airports and shopping centers, verifying access to private property, and casting votes) to intelligent vision-based human-computer interaction such as ATMs, cell phones, and intelligent buildings [2].
Identifying the images and processing the data is a
challenging task because of the various factors involved in this
sophisticated process such as illumination, angle of pose,
accessories, facial expression, and aging effects [2].
There are two sets of data involved in a face recognition system. The first is the training set, which is used in the learning stage. The second is the testing set, which is used during recognition [3].
There are numerous technologies and algorithms used in face recognition systems; the most popular among them is the Eigenfaces algorithm, which we have implemented in our system. Real-time response is often understood to be on the order of milliseconds, and sometimes microseconds, which is the most crucial criterion in the system design.
The rest of this paper is organized as follows. In Section II, we review a variety of face detection and recognition methods. Section III contains an overview of the system, including a description of the eigenface method with Haar-like features and the Roberts cross edge detector, together with the analysis methodology of the paper; we describe the steps used for face detection and recognition. In Section IV, we provide details of the experiments and the results of our system. Finally, Section V concludes the paper regarding the potential usefulness of our system and highlights some directions for future research.
II. RELATED WORK
Several methods have been suggested for face recognition over the past few years, and a recent survey can be found in [4]. The most common techniques used in face recognition are Principal Component Analysis (PCA), Partitioned Iterated Function Systems (PIFS), Local Feature Analysis (LFA), Wavelets and the Discrete Cosine Transform (DCT), Neural Networks, Template Matching, and Model Matching. The choice of a particular method is determined by its suitability for a specific application [5].
In face recognition by Template Matching [6], salient
regions of the facial image are extracted, and then these regions
are compared on a pixel-by-pixel basis with an image in the
database. The advantage of this method is that the image
preprocessing is simple, but the database search and
comparison are computationally expensive.
In face recognition by Neural Network [7], which is based on learning the faces in a training phase, the learning set of faces should be large enough to capture the variations encountered in real-life situations. Neural network solutions model the face recognition problem very well, but they require significant training time.
In the Local Feature Analysis (LFA) technique [8], dozens of features from different regions of the face, together with the relative locations of these features, are extracted and incorporated to identify and verify the face image. Although the LFA method offers robustness in carrying out a match under local variations in the facial image, the technique is not robust against global facial attributes.
In [9], Hidden Markov Models (HMM) and wavelets are used for face recognition; during recognition, the learned model that best matches a query image is selected. The success of Model Matching methods depends mainly on building a realistic, representative model.
The Eigenface method [2] is one of the well-known face detection and face recognition algorithms. In the eigen-representation phase, every face in the database is represented as a vector of weights, with PCA used to encode the face images and capture the face features. Face recognition is performed by locating the images in the database whose weights are closest, in Euclidean distance, to the weights of the test image. Automatic learning and later face recognition are practical within the Eigenface scheme, and it has advantages over other face recognition algorithms that make the application practical: its simplicity, speed, learning capacity, and insensitivity even to gradual changes in the face images.
In our proposed methodology, we use a multi-algorithm approach that combines PCA-based eigenfaces with Haar-like features and the Roberts cross edge detector, applied to the same facial data, to decide the identity of a subject. In this paper, we present a prototype system implementing our face recognition technique.
III. NEW APPROACH
The main target of this research is to build a real-time system that can be used in real-world environments, where many technical systems require natural human-computer interfaces based on different kinds of cameras installed in everyday living and working environments.
Since facial recognition is not possible unless the face is first detected and isolated from the background, our approach consists of two stages. The first stage locates the faces in the image, using the face detector to examine image locations coming from a live video. The second stage, after analysis of the facial image, declares a possible match when one of the faces previously stored in the database is close enough to the new face. The whole algorithmic architecture is shown in Figure 1.
A. Detection Process
Since face recognition algorithms are very sensitive to
different parameters such as lighting conditions, facial emotion
(angry, smiling, etc.), hair and makeup, it is extremely
important to pre-process detected faces before applying face
recognition.
Two face detection systems were trained: one with the Haar-like feature set of Viola and Jones [10], in which the basic Haar features for the eyes and the face are added to the eigenface method and detection is very fast, and one with the Roberts cross edge detector, in which face edges are detected and added to the database.
The eigenface detection technique is based on PCA [1], which is considered one of the best dimensionality reduction tools for reducing a data set to a smaller one while preserving as much information as possible in the mean-square sense. Suppose we have a set of faces that forms a K-dimensional subspace of the set of all images; the best such subspace can be found with PCA by fitting a hyperplane to the set of faces.
PCA, with the computation of eigenfaces, is the first step used to process the image database, i.e., to store the set of images with their labels.
Given a collection of M labeled training images, the eigenfaces modeling works as follows [11]:
Step 1: Each image I_i in the training set is transformed into a column vector \Gamma_i and placed into the set

S = \{\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M\}    (1)

Step 2: Compute the mean image \Psi, which is equal to the average face image vector:

\Psi = \frac{1}{M} \sum_{i=1}^{M} \Gamma_i    (2)

where M is the number of face images and \Gamma_i is the i-th face image vector.

Step 3: Find the difference \Phi_i between each input image and the mean image:

\Phi_i = \Gamma_i - \Psi    (3)

Step 4: Find the covariance matrix:

C = \frac{1}{M} \sum_{i=1}^{M} \Phi_i \Phi_i^T    (4)

Step 5: Compute the eigenvectors and eigenvalues of C.

Step 6: The M' most significant eigenvectors are chosen as those with the largest corresponding eigenvalues.

Step 7: Project all the face images onto these eigenvectors to form the feature vector of each face image.

Figure 1: The framework for multi-face detection and recognition
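To make the steps above concrete, the following is a minimal NumPy sketch of Steps 1-7, assuming the training faces are already cropped, aligned grayscale images of equal size; the function and variable names are illustrative and do not come from the authors' Visual C# implementation. It uses the standard Turk-Pentland trick of diagonalizing the small M x M matrix instead of the full covariance matrix.

```python
import numpy as np

def train_eigenfaces(images, num_components):
    # Step 1: flatten each image I_i into a column vector Gamma_i.
    A = np.stack([img.astype(np.float64).ravel() for img in images], axis=1)  # N x M
    M = A.shape[1]
    # Step 2: mean face Psi.
    psi = A.mean(axis=1, keepdims=True)
    # Step 3: difference vectors Phi_i = Gamma_i - Psi.
    Phi = A - psi
    # Steps 4-5: instead of the huge N x N covariance C = (1/M) Phi Phi^T,
    # diagonalize the small M x M surrogate Phi^T Phi, whose eigenvectors
    # map back to those of C.
    L = (Phi.T @ Phi) / M
    eigvals, V = np.linalg.eigh(L)                       # ascending order
    order = np.argsort(eigvals)[::-1][:num_components]
    # Step 6: keep the M' eigenvectors with the largest eigenvalues,
    # mapped back to image space and normalized (the eigenfaces).
    U = Phi @ V[:, order]
    U /= np.linalg.norm(U, axis=0, keepdims=True)
    # Step 7: project every training face onto the eigenfaces to get
    # its feature (weight) vector.
    weights = U.T @ Phi                                  # num_components x M
    return psi, U, weights
```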
Since our design target is to build a real-time system, we emphasize achieving very robust face detection rates. According to [10], Haar-like features are simple and inexpensive image features. A special representation of the image, called the integral image, makes feature extraction faster: using the integral image representation, the value of any rectangular feature can be computed from a few rectangular sums instead of from individual pixels. Therefore, image scaling is not necessary; it is replaced by scaling the rectangular features.
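As an illustration of why the integral image makes rectangular features cheap, the short sketch below computes an integral image with NumPy and evaluates a two-rectangle (edge-type) Haar-like feature using only a handful of array lookups; the particular feature layout is an assumed example, not one taken from [10].

```python
import numpy as np

def integral_image(img):
    # ii[y, x] holds the sum of all pixels above and to the left of (y, x);
    # the extra leading row/column of zeros simplifies the corner arithmetic.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    # Any rectangular sum needs only four lookups in the integral image.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def two_rect_feature(ii, top, left, height, width):
    # Example edge-type feature: sum of the left half minus the right half.
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, width - half))

patch = np.random.randint(0, 256, (24, 24)).astype(np.float64)  # toy 24x24 window
print(two_rect_feature(integral_image(patch), 4, 4, 8, 12))
```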
Since some parts of the human face are more important than others for successful face recognition, in addition to face detection we also use Haar-like features to detect the eyes, in order to recognize partially occluded faces. Figure 2 shows the detected eyes and the detected face in the image, with the detected face outlined in green and the detected eyes outlined in blue.
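A minimal sketch of this detection step, assuming the pretrained frontal-face and eye Haar cascades that ship with OpenCV; the drawing colors follow the green-face/blue-eyes convention described above, but this is an illustrative reconstruction rather than the authors' code.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("frame.jpg")                 # e.g. a frame grabbed from live video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces, then search for eyes only inside each detected face region.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)        # face in green
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = frame[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (255, 0, 0), 2)  # eyes in blue

cv2.imwrite("detected.jpg", frame)
```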
Once the eigenface process starts, Roberts cross edge detection [12] runs simultaneously as a parallel process.
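This parallel arrangement can be sketched with a small thread pool that submits the Haar-cascade detector and a compact Roberts cross gradient as two concurrent tasks on the same frame; Python's concurrent.futures is used here only as a stand-in for the authors' actual threading.

```python
from concurrent.futures import ThreadPoolExecutor
import cv2
import numpy as np

def detect_faces(gray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def roberts_magnitude(gray):
    g = gray.astype(np.float64)
    gx = g[:-1, :-1] - g[1:, 1:]          # response of the first diagonal kernel
    gy = g[:-1, 1:] - g[1:, :-1]          # response of the second diagonal kernel
    return np.sqrt(gx ** 2 + gy ** 2)

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
with ThreadPoolExecutor(max_workers=2) as pool:
    faces_job = pool.submit(detect_faces, gray)         # eigenface/Haar branch
    edges_job = pool.submit(roberts_magnitude, gray)    # edge-detection branch
    faces, edges = faces_job.result(), edges_job.result()
```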
There are many methods to perform edge detection; one of the main categories is the gradient method, which detects edges by looking for maxima and minima in the first derivative of the image. To detect edges with the Roberts cross technique, a 2-D spatial gradient measurement is performed on the image, highlighting regions of high spatial frequency, which largely correspond to edges. Figure 2 shows the input image after edge detection using the Roberts cross operator, and the resulting grayscale edge-segmented image is shown in Figure 3.
Figure 2. Edge-detected image using the Roberts cross operator
Figure 3. Grayscale image using the Roberts cross operator
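For reference, the Roberts cross operator convolves the image with a pair of 2x2 diagonal kernels and combines the two responses into a gradient magnitude. The sketch below, assuming OpenCV's filter2D, rescales the magnitude into an 8-bit grayscale edge image of the kind shown in Figure 3.

```python
import cv2
import numpy as np

# The two 2x2 Roberts cross kernels, one per diagonal.
kx = np.array([[1, 0],
               [0, -1]], dtype=np.float64)
ky = np.array([[0, 1],
               [-1, 0]], dtype=np.float64)

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)
gx = cv2.filter2D(gray, -1, kx)
gy = cv2.filter2D(gray, -1, ky)

# Gradient magnitude, rescaled to an 8-bit grayscale edge image.
mag = np.sqrt(gx ** 2 + gy ** 2)
edges = (255.0 * mag / max(mag.max(), 1e-9)).astype(np.uint8)
cv2.imwrite("edges.jpg", edges)
```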
B. Face Recognition
Once the eigenfaces are created in the first phase, we can project any test face image into the eigenspace [13]. The classification can then be made by a simple Euclidean distance measure between the projected vector and each face image feature vector; the Euclidean distance formula can be found in [14]. After applying the Euclidean distance measure, the closest face in the eigenspace is selected, and the test face image is recognized as a match if the distance to that face is below a threshold; otherwise, the face is classified as an “unknown” face. Following this methodology, we can classify each image as either a known face image or an unknown face image.
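Given the mean face, the eigenfaces, and the stored weight vectors produced in the training phase (for example, as returned by the train_eigenfaces sketch earlier), the recognition step reduces to one projection and a nearest-neighbour search with a distance threshold. The names and the threshold below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def recognize(test_img, psi, U, weights, labels, threshold):
    # Project the test face into the eigenspace.
    phi = test_img.astype(np.float64).ravel()[:, None] - psi
    w = U.T @ phi                                   # weight vector of the test face
    # Euclidean distance to every stored face's weight vector.
    dists = np.linalg.norm(weights - w, axis=0)
    best = int(np.argmin(dists))
    # Declare a match only if the closest face is below the threshold;
    # otherwise classify the face as "unknown".
    if dists[best] < threshold:
        return labels[best], float(dists[best])
    return "unknown", float(dists[best])
```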
IV. EXPERIMENT AND RESULTS
The complete face recognition system is implemented in Visual C# on the MS-Windows platform. A standard web camera capturing 60 frames per second is used to acquire video frames. The experiments were conducted on a 2.4 GHz Pentium computer system.
Figure 4 shows the interface used in our system. First, the faces in an input image must be located and registered in the database; that is, the names of the people in the image are entered manually. Then, once a face has been registered, whenever the same face is detected again the system reports how many faces are in the image and to whom those faces belong, since face recognition means figuring out whose face it is.
We conducted our experiments on people in different places using live video with different backgrounds. It was found that the system successfully detects almost all of the faces and eyes in the images.
Figures 4 and 5 show, respectively, a recognized face occluded by a hand and multiple recognized faces.
To evaluate the performance of our proposed face recognition system, a number of self-captured face images were used in the experiment. We measured the recognition rate for various facial expressions and poses. The recognition rate is the ratio of successful attempts (cases where the best match is the correct match) to the total number of attempts [5].
The results show that the recognition rate is 92%. This makes our multi-algorithmic technique very suitable for many applications.
Figure 4. Recognized occluded face with the hand
Figure 5. Multiple recognized faces
V. CONCLUSIONS AND FUTURE WORK
Face recognition can be used in many applications such as ATMs, surveillance, video conferencing, etc. In this work, we propose a new technique that relies on an effective combination of multiple algorithms. The approach is fast and easy, and it provides a practical solution to the recognition problem. The system has produced a success rate of 92% over a large database of faces.
As future work, we plan to extend our technique by implementing it on wireless multimedia sensor network nodes using IMB400 multimedia boards.
REFERENCES
[1] K. Lin, K. Lam, X. Xie and W. Siu, “An Efficient Human Face Indexing Scheme Using Eigenfaces,” in Proceedings of the 2003 IEEE International Conference on Neural Networks and Signal Processing, pp. 920-923, Dec. 2003.
[2] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[3] F. Chichizola, L. Giusti, A. Giusti and M. Naiouf, “Face Recognition: Reduced Image Eigenfaces Method,” in Proceedings of the 47th International Symposium ELMAR, pp. 159-162, Jun. 2005.
[4] R. Jafri and H. Arabnia, “A Survey of Face Recognition Techniques,” Journal of Information Processing Systems, vol. 5, no. 2, pp. 41-68, Jun. 2009.
[5] S. Kar, S. Hiremath, D. Joshi, V. Chadda and A. Bajpai, “A Multi-Algorithmic Face Recognition System,” in Proceedings of the International Conference on Advanced Computing and Communications (ADCOM), pp. 321-326, Dec. 2006.
[6] R. Brunelli and T. Poggio, “Face Recognition: Features versus Templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct. 1993.
[7] D. Bryliuk and V. Starovoitov, “Access Control by Face Recognition using Neural Networks and Negative Examples,” in Proceedings of the 2nd International Conference on Artificial Intelligence, Crimea, Ukraine, pp. 428-436, Sept. 2002.
[8] P. Penev and J. Atick, “Local Feature Analysis: A General Statistical Theory for Object Representation,” Computational Neuroscience Laboratory, The Rockefeller University, USA.
[9] M. Bicego, U. Castellani and V. Murino, “Using Hidden Markov Models and Wavelets for Face Recognition,” in Proceedings of the 12th IEEE International Conference on Image Analysis and Processing (ICIAP’03), pp. 52-56, Sept. 2003.
[10] P. Viola and M. Jones, “Robust Real-time Object Detection,” in Second International Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing, and Sampling, Vancouver, Canada, Jul. 2001.
[11] A. Khan and L. Alizai, “Introduction to Face Detection Using Eigenfaces,” in Proceedings of the 2nd IEEE International Conference on Emerging Technologies (ICET 2006), pp. 128-132, Nov. 2006.
[12] H. Lakshmi and S. Patilkulakarni, “Segmentation Algorithm for Multiple Face Detection in Color Images with Skin Tone Regions using Color Spaces and Edge Detection Techniques,” International Journal of Computer Theory and Engineering, vol. 2, no. 4, pp. 552-558, Aug. 2010.
[13] Md. Monwar, P. Polash, Md. Islam and S. Rezaei, “A Real-Time Face Recognition Approach from Video Sequence Using Skin Color Model and Eigenface Method,” in Canadian Conference on Electrical and Computer Engineering (CCECE 2006), pp. 2181-2185, May 2006.
[14] D. Pissarenko, “Eigenface-based Facial Recognition,” Dec. 2002.