Approach of human face recognition based on SIFT feature extraction and 3D rotation model

Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
01/2011; DOI: 10.1109/ICINFA.2011.5949039

ABSTRACT One of the main problems in face recognition is the influence of varying pose and illumination. This paper proposes a novel face recognition method to overcome these influences, based mainly on SIFT feature extraction and a 3D rotation model of the head. In the first stage, the SIFT descriptor is used to select key points from faces in a database covering seventy people in nine poses each. In the second stage, a matching algorithm finds candidates for a test face in the database and applies several criteria to confirm the final match. If the second stage cannot produce a satisfactory result, the 3D rotation method is triggered and makes a secondary decision by normalizing the depth information of the faces. Tested on the face database, the algorithm achieves an accuracy of 94.45%.
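The abstract does not give implementation details for the matching stage, but nearest-neighbour matching of SIFT descriptors is conventionally done with Lowe's ratio test. The sketch below illustrates that idea on synthetic 128-dimensional descriptors; the function name, the 0.8 threshold, and the toy data are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def match_descriptors(test_desc, db_desc, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    test_desc: (n, 128) SIFT descriptors of the test face.
    db_desc:   (m, 128) SIFT descriptors of a database face.
    Returns index pairs (i, j) whose best match is clearly
    better than the second-best (distance ratio below `ratio`).
    """
    matches = []
    for i, d in enumerate(test_desc):
        dists = np.linalg.norm(db_desc - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]   # best and second-best match
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Toy example: 3 probe descriptors that are noisy copies of
# database descriptors 0, 2 and 4; each should match its source.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 128))
test = db[[0, 2, 4]] + rng.normal(scale=0.01, size=(3, 128))
print(match_descriptors(test, db))
```

Counting such confident matches per database face is one common way to rank candidate identities before applying further acceptance criteria.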

  • ABSTRACT: Video surveillance and face recognition systems have become the subject of increased interest and controversy after the September 11 terrorist attacks on the United States. In favor of face recognition technology, there is the lure of a powerful tool to aid national security. On the negative side, there are fears of an Orwellian invasion of privacy. Given the ongoing nature of the controversy, and the fact that face recognition systems represent leading-edge and rapidly changing technology, face recognition technology is currently a major issue in the area of social impact of technology. We analyze the interplay of technical and social issues involved in the widespread application of video surveillance for person identification.
    IEEE Technology and Society Magazine 02/2004; 23(1):9-19. DOI:10.1109/MTAS.2004.1273467 · 0.49 Impact Factor
  • ABSTRACT: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
    Journal of Cognitive Neuroscience 01/1991; 3(1):71-86. DOI:10.1162/jocn.1991.3.1.71 · 4.69 Impact Factor
  • ABSTRACT: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high-dimensional image space, if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well-separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 07/1997; 19(7):711-720. DOI:10.1109/34.598228 · 5.69 Impact Factor
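The eigenface approach summarized in the entries above reduces recognition to projecting a face onto the principal components of the training set and comparing the resulting weight vectors. A minimal NumPy sketch, with random vectors standing in for face images (the 8x8 image size, the number of components, and all names are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "training faces": 10 flattened 8x8 images.
faces = faces_train = rng.normal(size=(10, 64))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the principal components of the centered face
# set, obtained here from the SVD; keep the top 4 components.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                      # shape (4, 64)

# Each training face is characterized by its eigenface weights.
weights = centered @ eigenfaces.T        # shape (10, 4)

def recognize(image):
    """Project a probe image and return the nearest training face."""
    w = (image - mean_face) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

# A slightly noisy copy of face 3 should come back as face 3.
probe = faces[3] + rng.normal(scale=0.05, size=64)
print(recognize(probe))
```

The Fisherface method keeps the same project-then-compare structure but replaces the PCA projection with one derived from Fisher's linear discriminant, which uses class labels to separate identities rather than merely preserving variance.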

