Research Journal of Applied Sciences, Engineering and Technology 4(17): 2879-2886, 2012
ISSN: 2040-7467
© Maxwell Scientific Organization, 2012
Submitted: November 25, 2011 Accepted: January 13, 2012 Published: September 01, 2012
Corresponding Author: Muhammad Sharif, Department of Computer Sciences, COMSATS Institute of Information Technology
Wah Cantt., 47040, Pakistan, Tel.: +923005788998
Face Recognition Based on Facial Features
1Muhammad Sharif, 2Muhammad Younas Javed and 1Sajjad Mohsin
1Department of Computer Sciences, COMSATS Institute of Information Technology Wah Cantt.,
47040, Pakistan
2Department of Computer Engineering, National University of Science and Technology,
Peshawar Road, Rawalpindi, 46000, Pakistan
Abstract: Over the last decade, several different methods have been proposed and developed for face recognition, one of the most challenging areas of image processing. Face recognition has various applications in security systems and crime investigation. The study comprises three phases, i.e., face detection, facial feature extraction and face recognition. The first phase is face detection, where the region of interest, i.e., the feature region, is extracted. The second phase is feature extraction, in which the facial features, i.e., eyes, nose and lips, are extracted from the detected face area. The last module is the face recognition phase, which uses the extracted left eye for recognition by combining Eigenfeatures and Fisherfeatures.
Keywords: Detection, eigenfeatures, face, fisherfeatures, recognition, segmentation
INTRODUCTION
Various operations make use of computer-based procedures and methods to process digital images. These operations include image reconstruction, enhancement, compression, registration, rendering, analysis, segmentation, feature extraction, and face detection and recognition; all of these processes are components of the broad field of digital image processing.
Face recognition is a large field built on extensive investigation and experimentation. Each year the attempted solutions grow in complexity and execution time. The history of face recognition shows that work in this field began roughly 35 years ago. Face detection and recognition remain among the most demanding problems in image processing; numerous methods and techniques have been proposed and developed in this regard, but there is still considerable room for new and effective work. In general terms, face recognition is the process of examining a person's face and then matching the scanned face against a database of known faces; it identifies individuals by their features (Hiremath et al., 2007).
Pattern recognition is an active subject in the context of human face recognition. It has a wide range of applications, such as identification, driving-licence and identity-portrait verification, bank cheque processing, automatic security systems, image processing, vision-based human-machine interaction, face detection and recognition, and entertainment.
There are two main categories of human face identification: holistic approaches and approaches based on the analysis of facial features. The first category uses all the characteristics of a pattern or of the facial region. The main methods in this category are neural-network-based face recognition (Khanfir and Jemaa, 2006), elastic graph matching (Wiskott et al., 1999), Eigenfaces (Turk and Pentland, 1991), Fisherfaces and face recognition through density-isoline analysis. The second category works on specific features extracted from the face. The main methods of this type include face recognition through facial feature extraction, the template matching approach (Hsu et al., 2002) and the boundary tracing approach. Face recognition based on human face geometry has also been investigated (Basavaraj and Nagraj, 2006).
In many techniques, face recognition is a three-phase process, i.e., face detection, feature extraction and, lastly, face recognition (Zhao et al., 2003).
In this study a face detection, feature extraction and recognition technique is presented which detects the face in frontal-view face images based on skin colour information. The face region containing the facial features is then extracted. From the detected face, the eyes are
extracted using eye maps. The nose and lips are then extracted on the basis of the distance between the two eyes. After feature extraction, a small face region, the extracted left eye, is picked in order to reduce the computational time. For the recognition step, a combination of Eigenfeatures and Fisherfeatures is used.
LITERATURE REVIEW
Existing work: Various methods have been proposed for face recognition. A great deal of work has been done in this area, and many different methods have been used for face detection and recognition, ranging from the Eigenface approach (Turk and Pentland, 1991) and Gabor wavelets (Duc et al., 1999; Zhang et al., 2005) to the 2D discrete cosine transform for facial feature extraction (Eickler et al., 2000) and geometrical approaches (Hiremath and Ajit, 2004; Ján and Miroslav, 2008). PCA-based facial feature extraction (Belhumeur et al., 1997a, b) has the drawback of being sensitive to illumination changes. The most commonly used methods are PCA (Turk and Pentland, 1991), LDA (Belhumeur et al., 1997a, b), ICA (Bartlett et al., 2002), kernel methods, Gabor wavelets (Yousra and Sana, 2008; Zhang et al., 2004) and neural networks. Many researchers have used facial features for recognition: some extract face feature values in the form of Eigen or Fisher values, while others use the features of the face themselves, i.e., the eyes, nose and lips, to recognize face images.
Accurate facial feature extraction is important for accurate face recognition. Features can be extracted in many different ways, each raising different issues. An early approach to facial feature extraction was proposed by Yuille et al. (1989), who mainly used deformable parameterized templates to extract the eyes and mouth. However, the method was computationally expensive and its accuracy was not guaranteed.
An existing method that extracts facial features and performs recognition based on them was developed by Wiskott et al. (1999). In this technique a feature extraction (iris, mouth) and face recognition approach is presented. The iris and mouth are extracted and a template matching approach is used for recognition. The algorithm first locates the face area by means of skin colour and then extracts iris candidates based on computed costs. The mouth region is then extracted using the RGB colour space, after which a connected-component process is used to accurately extract the lips region. To verify that the detected lips belong to the mouth, the mouth corner points are extracted using the SUSAN approach. Finally, face identification is carried out using template matching.
Fig. 1: System block diagram
Fig. 2: Input image
In the proposed work, both approaches are used for recognition. The facial features are detected from the detected face, and the left eye is then picked; its Eigen and Fisher values are calculated for recognition.
MATERIALS AND METHODS
The proposed technique is a face detection, feature extraction and recognition technique. The system contains three main phases:
• Face detection
• Features extraction
• Face recognition
The basic working of the system is as follows (Fig. 1):
Face detection: The face detection phase involves the following steps:
• Pre-processing
• Image resize
• YCbCr conversion and normalization
• Skin segmentation
• Face detection
Pre-processing: First, an input image is presented to the system; Fig. 2 shows an example input image.
Image resize: The input image is resized to 200×180 pixels before further processing is applied.
YCbCr conversion and normalization: The objective of this step is to compensate for lighting effects in the input image and to attenuate noise. The image is converted to the YCbCr colour space, the maximum and minimum values of the luminance Y are found, Y is normalized, and its average value is computed; the red, green and blue components are then adjusted according to that average. The conversion is:

Y = 0.257R + 0.504G + 0.098B + 16 (1)

Cb = -0.148R - 0.291G + 0.439B + 128 (2)

Cr = 0.439R - 0.368G - 0.071B + 128 (3)

The maximum and minimum values of Y are calculated as:

Ymin = min(min(Y)) (4)

Ymax = max(max(Y)) (5)

where Y is the luminance plane of the YCbCr conversion of the input image.
The luminance is then normalized by:

Y = 255.0*(Y - Ymin)/(Ymax - Ymin) (6)

After that the average value of Y is calculated, which is used for the lighting compensation:

Yavg = sum(sum(Y))/(W*H) (7)
where,
W = Width of the input image and
H = Height of the input image
Fig. 3: Pre-processing stage (input image, resized image, YCbCr conversion)
Fig. 4: Extracted skin
Fig. 5: Detected face region
The output of this step is shown in Fig. 3.
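The pre-processing chain above (resize, YCbCr conversion and luminance normalization) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the BT.601 coefficients of Eq. (1)-(3) are assumed, the 200×180 target is read here as rows × columns, and the function and variable names are illustrative.

```python
import numpy as np

def preprocess(rgb, shape=(200, 180)):
    """Resize, convert to YCbCr and normalize luminance (sketch of Eq. 1-7)."""
    # Nearest-neighbour resize to the target shape (rows x cols assumed).
    rows = np.linspace(0, rgb.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, rgb.shape[1] - 1, shape[1]).astype(int)
    img = rgb[rows][:, cols].astype(np.float64)

    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    Y  =  0.257 * R + 0.504 * G + 0.098 * B + 16    # Eq. (1)
    Cb = -0.148 * R - 0.291 * G + 0.439 * B + 128   # Eq. (2)
    Cr =  0.439 * R - 0.368 * G - 0.071 * B + 128   # Eq. (3)

    y_min, y_max = Y.min(), Y.max()                 # Eq. (4)-(5)
    Y = 255.0 * (Y - y_min) / (y_max - y_min)       # Eq. (6)
    y_avg = Y.sum() / (Y.shape[0] * Y.shape[1])     # Eq. (7)
    return Y, Cb, Cr, y_avg
```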
Skin segmentation: In the next step the pre-processed image is used as input and the skin region is extracted using the YCbCr colour space. This process uses the skin colour to separate the face region.
The value of Cr (the red-difference chroma component) is restricted to a fixed range, and only pixels in that range are picked as skin pixels of the face. The range used here is 10 < Cr < 45. Figure 4 shows the segmented skin region.
The process of skin extraction works as follows:
Skin(i) = (R(i), C(i)), i = 1, ..., j (8)

where,
j = length(R, C)
R = skin index rows
C = skin index columns
Face detection: Once the skin region is obtained, the face is isolated from the extracted skin region. Extra information is eliminated to obtain the region of concern, i.e., the facial feature region. Figure 5 shows the segmented face region.
Features extraction phase: The second phase is the feature extraction phase, which receives the detected face as input. Once the face is detected in the image, the next step is to find the features in the detected face region. The extracted features are:
• Eyes
• Nose
• Lips
First of all the eyes are detected using chroma and luminance maps (Fig. 6 and 7). The nose and lips are then extracted using a formula based on the distance between the two eyes.
For feature extraction, seven points are located on the face:
• Left eye center
• Right eye center
• Eyes midpoint
• Nose tip
• Lips center
• Lips left corner
• Lips right corner
Eye extraction: To extract the eyes from a human face, information about the dark pixels on the face is needed. Since the eyes differ from the skin colour, using a colour space and separating the colours is a good way to locate them. The YCbCr colour space provides good information about the dark and light pixels in an image, and tends to show the eye area with high Cb and low Cr values. To detect the eye region, two eye maps are constructed in the YCbCr colour space which highlight the pixels with high Cb and low Cr values in the eye region (Fig. 6):

ChromaMap = 1/3 (Cb/Cr + Cb^2 + (1 - Cr)^2) (9)

The luma map is built by applying morphological (Sawangsri et al., 2004) erosion and dilation operations on the luminance component of the YCbCr image. These operations enhance the brighter and darker pixels around the eye region in the luminance component of the image:

LumaMap = Ydilate / (Yerode - 1) (10)
Fig. 6: Chromamap
Fig. 7: Lumamap
Fig. 8: Eyes extraction process
The Y image is dilated and eroded with a structuring element. The ChromaMap and LumaMap (Fig. 7) are then added together to form a combined map, which is further dilated to brighten the eye pixels. The eye region is detected by applying thresholding and erosion on this map.
The detected eye region is then marked and the eyes are extracted from the image. Figure 8 shows the whole extraction process.
Nose and lips detection: The nose and lips are extracted on the basis of the distance between the two eyes, assuming that they lie at specific ratios of that distance.
Nose detection: For nose extraction the distance between
the two eyes is calculated as:
D = sqrt((Lx - Rx)^2 + (Ly - Ry)^2) (11)
where,
D = Distance between the center points of two eyes
Lx = Left eye x coordinate
Rx = Right eye x coordinate
Ly = Left eye y coordinate
Ry = Right eye y coordinate
Given the eye distance, a formula based on that distance is used to extract the nose. It is assumed that the nose tip lies at 0.55 of the eye distance:
N = (My+D)*0.55 (12)
where,
N = Nose tip point
My = Eyes midpoint
D = Distance between the center points of two eyes
Lips extraction: For lips extraction, three lip points are extracted:
• Lips center point
• Lips left corner point
• Lips right corner point
As for the nose, the distance between the two eyes is calculated as:
D = sqrt((Lx - Rx)^2 + (Ly - Ry)^2) (13)
where,
D = Distance between the center points of two eyes
Lx = Left eye x coordinate
Rx = Right eye x coordinate
Ly = Left eye y coordinate
Ry = Right eye y coordinate
Given the eye distance, a formula based on that distance is used to extract the lips. For the lips center point, it is assumed that the lips lie at 0.78 of the eye distance:
L = (My+D)*0.78 (14)
Fig. 9: Features extraction (detected face, detected eyes, extracted features)
Fig. 10: Output of the features extraction phase
where,
L = Center point of lips
My = Eyes midpoint
D = Distance between the center points of the two eyes
For the extraction of the lips' left and right corner points, it was observed that the lip corners are located at approximately 0.78 of the distance from the eye center points. These points are extracted as:
Ll = (Ly+D)*0.78 (15)
where,
Ll = Left corner point of lips
Ly = Left eye y coordinate
D = Distance between the center points of two eyes
Similarly the right corner point of mouth is extracted as:
Lr = (Ry+D)*0.78 (16)
where,
Lr = Right corner point of mouth
Ry = Right eye y coordinate
D = Distance between the center points of two eyes
The output of this phase can be seen in Fig. 9 and Fig. 10.
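A literal transcription of Eq. (11)-(16) is sketched below. It assumes the two eye centers have already been located by the eye-map step, takes the (My + D)·ratio expressions exactly as written in the paper, and uses illustrative names; the x-coordinates assigned to the nose and lip points are an assumption, since the equations only fix the vertical positions.

```python
import math

def feature_points(left_eye, right_eye):
    """Locate nose tip and lip points from the eye centers (Eq. 11-16, literal form)."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    d = math.sqrt((lx - rx) ** 2 + (ly - ry) ** 2)      # Eq. (11)/(13)
    mx, my = (lx + rx) / 2.0, (ly + ry) / 2.0           # eyes midpoint

    nose_y       = (my + d) * 0.55                      # Eq. (12)
    lips_y       = (my + d) * 0.78                      # Eq. (14)
    lips_left_y  = (ly + d) * 0.78                      # Eq. (15)
    lips_right_y = (ry + d) * 0.78                      # Eq. (16)

    return {
        "eyes_midpoint": (mx, my),
        "nose_tip": (mx, nose_y),         # x-coordinate assumed on the eye midline
        "lips_center": (mx, lips_y),
        "lips_left": (lx, lips_left_y),   # corner x-coordinates assumed under each eye
        "lips_right": (rx, lips_right_y),
    }
```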
Face recognition phase: The last phase of the system is face recognition. This phase makes use of two algorithms:
• Eigenfaces
• Fisherfaces
The recognition stage uses the extracted eyes for the recognition process. A small portion of the face is chosen to perform recognition in order to save computational time and increase accuracy; the chosen part is the extracted left eye.
The proposed approach to face recognition exploits the fact that faces can be recognized using a smaller portion of the face. For this purpose the left eye region is picked and an eye space is defined. The eye area is represented through Eigenfaces, which are the eigenvectors of a set of eyes of different individuals.
Most of the earlier work on face identification using Eigenfaces computes the Eigenfaces of a set of whole-face images, i.e., the whole face is used to compute the Eigenvalues of different individuals. The proposed approach focuses only on a smaller region of the face to recognize different individuals, which makes the computational cost and time efficient.
Calculating the eigenvalues: Since there are a number of different persons in the database, there is an ensemble of different individual images forming different points in the eye space. Eye images that are very alike cannot be randomly distributed in the eye space, because the purpose is to differentiate individuals even when their features are quite similar; a low-dimensional subspace is therefore needed to describe such images. The chief objective of PCA is to discover vectors that can effectively distribute the images in the whole space. These vectors describe the eye space of different individuals.
Creation of the Eigenvalues is the main step in this process; once they are created, the next step is essentially a pattern recognition task. There are M different training images of eyes of different individuals whose Eigenvalues are computed and distributed in the eye space. The M1 eigenvectors with the largest associated Eigenvalues are selected; the choice of these eigenvectors depends entirely on the Eigenvalues calculated for the different individuals.
Whenever an input image is received, its Eigenvalue components are computed. A weight vector defines the contribution of every Eigenface in representing the image to be judged.
Table 1: Features extraction rate comparison
                   Detection rate (%)                       Processing time (sec)
Facial features    Existing (Yuen et al., 2010)   Proposed  Existing (Yuen et al., 2010)   Proposed
Eyes               93.08                          99        2.04                           0.592
Mouth              95.83                          99        0.46                           0.712
Fig. 11: Features and recognition rate comparison through bar charts
Fig. 12: Processing time comparison
The main purpose is to determine which class most closely resembles the input image. The concept of Fisherface classes is used to differentiate individuals: the number of classes equals the number of different persons present in the database used with the system. The class best suited to the input image is selected based on the minimum Euclidean distance between the class and the input image.
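A minimal sketch of the eigen-eye training and nearest-class matching described above is given below, assuming vectorized left-eye patches of equal size; PCA is computed via an SVD of the centred data, and classification uses the minimum Euclidean distance to each person's class centroid. The Fisherface (LDA) refinement that the paper combines with the eigenfeatures is omitted, and all names are illustrative.

```python
import numpy as np

def train_eigen_eyes(eyes, labels, m1=20):
    """PCA on vectorized left-eye patches; keep the M1 largest eigenvectors."""
    X = np.asarray([e.ravel() for e in eyes], dtype=np.float64)   # M x N data matrix
    mean = X.mean(axis=0)
    A = X - mean
    # Eigenvectors of the covariance matrix via SVD of the centred data.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    eigen_eyes = Vt[:m1]                      # M1 eigenvectors with largest eigenvalues
    weights = A @ eigen_eyes.T                # projection of each training eye
    return mean, eigen_eyes, weights, np.asarray(labels)

def recognize(eye, mean, eigen_eyes, weights, labels):
    """Project a probe eye and return the label of the nearest class (Euclidean)."""
    w = (eye.ravel().astype(np.float64) - mean) @ eigen_eyes.T
    best, best_d = None, np.inf
    for lbl in np.unique(labels):
        centroid = weights[labels == lbl].mean(axis=0)   # one class per individual
        d = np.linalg.norm(w - centroid)
        if d < best_d:
            best, best_d = lbl, d
    return best
```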
RESULTS AND DISCUSSION
To evaluate its performance, the proposed system is tested against three databases:
• A self-selected AR database
• A self-selected CVL database
• A database containing 300 images of different individuals with illumination changes
Table 2: Recognition rate comparison
                   Recognition rate (%)                     Processing time (sec)
Facial features    Existing (Yuen et al., 2010)   Proposed  Existing (Yuen et al., 2010)   Proposed
Eyes               79.17                          97        0.24                           0.1754
Mouth              51.39                          75        0.10                           0.194
Table 3: Results on 72 images tested with nose of individuals
Feature Detection rate (%)
Nose 98
Table 4: Results on 72 images tested with right eye of individuals
Feature Recognition rate (%)
Right eye 96
Table 5: Features results and processing time comparison
                   Detection rate (%)
Facial features    Existing (Yuen et al., 2010)   Proposed
Eyes               93.08                          98
Mouth              95.83                          97
The results are compared with those of the existing technique proposed by Yuen et al. (2010), which is based on template matching.
Results comparison:
Results on the self-selected AR database: For comparison, the system is tested with the same number of images as used in the existing paper, i.e., 72 images with 3 images per person (12 males and 12 females), and additionally with 30 males and 30 females, each having 3 images. Table 1, Fig. 11 (detection and recognition rates) and Fig. 12 (processing time) show that the feature extraction rates and the processing time of the proposed technique are far better than those of the existing technique.
Similarly, Table 2, Fig. 11 (detection and recognition rates) and Fig. 12 (processing time) show that the recognition rate and recognition time of the proposed technique are far better than those of the existing technique.
In the feature extraction process the nose is also extracted from all the tested images. The nose detection rate over the 72 images is 98% (Table 3).
The 72 images are also tested with the right eye of each individual; the recognition rate using the right eye is 96% (Table 4).
Results on the self-selected CVL database: The self-selected CVL database contains 70 individuals with three images per person. The images selected were frontal faces with the eyes visible and open. Table 5 shows that the detection rate of facial features using the proposed technique is better than that of the existing technique.
Table 6: Recognition rate comparison
                   Recognition rate (%)
Facial features    Existing (Yuen et al., 2010)   Proposed
Eyes               79.17                          95
Table 7: Nose extraction on CVL database
Feature Detection rate (%)
Nose 97
Table 8: Face features extraction results
                   Detection rate (%)
Facial features    Existing (Yuen et al., 2010)   Proposed
Eyes               93.08                          99
Table 9: Face recognition rate
                   Recognition rate (%)
Facial features    Existing (Yuen et al., 2010)   Proposed
Eyes               79.17                          94
Fig. 13: Detection and recognition comparison (%)
Recognition rate: The extraction and recognition results on the CVL database are given in Tables 6 and 7.
Results on the database with illumination changes: This database contains 20 persons with 8 images per person. The results in Tables 8 and 9 and Fig. 13, for images with illumination and background changes, show the strength of the proposed work.
CONCLUSION
Although numerous methods have been proposed for face recognition, little work has been done on face identification based on facial features. In this study a face detection, feature extraction and face recognition technique has been presented. The main purpose of the system was to check whether it is possible to efficiently recognize different individuals using a small region of the face. For this purpose the eye region of the face is used to recognize different individuals. The results were tested on three databases; the results obtained are satisfactory and the system can be used efficiently for recognition in different applications.
ACKNOWLEDGMENT
This research work was done by the authors in the Department of Computer Science, COMSATS Institute of Information Technology, Wah Cantt Campus, Pakistan.
REFERENCES
Bartlett, M.S., J.R. Movellan and T.J. Sejnowski, 2002.
Face Recognition by Independent Component
Analysis. IEEE Trans. Neural Networks, 13(6):
1450-1464.
Basavaraj, A. and P. Nagraj, 2006. The Facial Features
Extraction for face Recognition based on
Geometrical approach. Canadian Conference on
Electrical and Computer Engineering, CCECE’ 06,
P.D.A. College of Engineering, Gulbarga, pp:
1936-1939.
Belhumeur, P.N., J.P. Hespanha and D.J. Kriegman,
1997a. Eigenfaces vs. Fisherfaces: Recognition using
class specific linear projection. IEEE Trans. Pattern
Anal. Mach. Intell., 19(7): 711-720.
Belhumeur, P.N., J.P. Hespanha and D.J. Kriegman, 1997b.
Eigenfaces vs. Fisherfaces: Recognition using class
specific linear projection. IEEE T. Pattern. Anal.,
19(7): 711-720.
Duc, B., S. Fischer and N.J. Bigu, 1999. Face
authentication with Gabor information on deformable
graphs. IEEE Trans. Image Proc., 8(4): 504-516.
Eickler, S., S. Müller and G. Rigoll, 2000. Recognition
of JPEG compressed face images based on statistical
methods. Image Vis. Comput., 18(4): 279-287.
Hiremath, P.S. and D. Ajit, 2004. Optimized geometrical
feature vector for face recognition. Proceedings of
the International Conference on Human Machine
Interface, Indian Institute of Science, Tata McGraw-
Hill, Bangalore, pp: 309-320, (ISBN: 0 07- 059757-
X).
Hiremath, P.S., D. Ajit and C.J. Prabhakar, 2007.
Modelling Uncertainty in Representation of Facial
Features for Face Recognition I-Tech. Vienna,
Austria, pp: 558. ISBN: 978-3-902613-03-5,
Hsu, R.L., M.A. Mottaleb and A.K. Jain, 2002. Face
detection in colour images. IEEE T. Pattern Anal.,
24(5): 696-706.
Ján, M. and K. Miroslav, 2008. Human face and facial
feature tracking by using geometric and texture
models. J. Electr. En., 59(5): 266-271.
Khanfir, S. and Y.B. Jemaa, 2006. Automatic facial
features extraction for face recognition by neural
networks, 3rd International Symposium on
Image/Video Communications over fixed and mobile
networks (ISIVC), Tunisia.
Sawangsri, T., V. Patanavijit and S.S. Jitapunkul, 2004.
Face segmentation using novel skin-colour map and
morphological technique. Trans. Engine., Comp.
Technol., 2.
Turk, M. and A. Pentland, 1991. Eigenfaces for
recognition. J. Cogn. Neurosci., 3(1): 71-86.
Wiskott, L., J.M. Fellous, N. Kruger and C.V.D.
Malsburg, 1999. Face Recognition by Elastic Bunch
Graph Matching. Intelligent Biometric Techniques in
Fingerprint and Face Recognition, Chapter 11, pp:
355- 396.
Yousra, B.J. and K. Sana, 2008. Automatic Gabor features
extraction for face recognition using neural networks,
IEEE 3rd International Symposium on Image/Video
Communications over fixed and Mobile Networks
(ISIVC).
Yuen, C.T., M. Rizon, W.S. San and T.C. Seong, 2010.
Facial features for template matching based face
recognition. American J. Engine. Appl. Sci., 3(1):
899-903.
Yuille, A.L., D.S. Cohen and P.W. Hallinan, 1989. Feature
extraction from faces using deformable templates, In
Proceeding of CVPR, pp: 104-109.
Zhang, B.L., H. Zhang and S.S. Ge, 2004. Face
recognition by applying Wavelet subband
representation and kernel associative memory. IEEE
Trans. Neural Networ., 15(1).
Zhang, H., B. Zhang, W. Huang and Q. Tian, 2005.
Gabor wavelet associative memory for face
recognition. IEEE T. Neural. Networ., 16(1).
Zhao, W., R. Chellappa, P.J. Phillips and A. Rosenfeld,
2003. Face recognition: A literature survey. ACM
Comput. Surv., 35(4): 399-458.