Conference Paper

# Face Recognition Based on Discriminant Evaluation in the Whole Space

Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ.

DOI: 10.1109/ICASSP.2007.366218 · Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Volume 2 · Source: IEEE Xplore


**ABSTRACT:** We present a prediction and regularization strategy for alleviating the conventional problems of LDA and its variants. A procedure is proposed for predicting eigenvalues using a few reliable eigenvalues from the range space. The entire eigenspectrum is divided using two control points; however, the effective low-dimensional discriminative vectors are extracted from the whole eigenspace. The estimated eigenvalues are used to regularize the eigenfeatures in the eigenspace. This prediction and regularization enable discriminant evaluation in the full eigenspace. The proposed method is evaluated and compared with eight popular subspace-based methods on a face verification task. Experimental results on popular face databases show that our method consistently outperforms the others.

Pattern Recognition Letters. 01/2010; 31:717-724.
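The idea above — keep the reliable large eigenvalues, predict replacements for the unreliable small and zero ones, then use the regularized spectrum to whiten the features — can be sketched as follows. The `1/k` decay model and the single control point `m` are illustrative assumptions for this sketch, not the paper's exact model:

```python
import numpy as np

def regularize_eigenspectrum(Sw, m):
    """Sketch: replace the unreliable small/zero eigenvalues of a scatter
    matrix with values predicted from the reliable range-space portion.
    `m` is the index of the last reliable eigenvalue (a control point)."""
    lam, V = np.linalg.eigh(Sw)
    lam, V = lam[::-1], V[:, ::-1]        # sort eigenvalues descending
    # Fit a simple lam_k ~ a / k decay model to the reliable eigenvalues
    # (the model form is an assumption made for illustration).
    k = np.arange(1, m + 1)
    a = np.median(lam[:m] * k)            # crude robust fit of the scale a
    k_all = np.arange(1, lam.size + 1)
    lam_reg = lam.copy()
    lam_reg[m:] = a / k_all[m:]           # predicted values for the tail
    # Whitening transform built from the regularized spectrum; because no
    # eigenvalue is zero, the whole eigenspace can be used.
    W = V / np.sqrt(lam_reg)
    return lam_reg, W
```

Because the regularized spectrum has no zero entries, the whitening step is well defined over the entire eigenspace, which is what permits discriminant evaluation in the full space rather than only in the range space.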

**ABSTRACT:** Face verification is a different task from face identification. Some traditional subspace methods that work well for identification may suffer from severe over-fitting when applied to verification. Conventional discriminative methods such as linear discriminant analysis (LDA) and its variants are highly sensitive to the training data, which hinders them from achieving high verification accuracy. This work proposes an eigenspectrum model that alleviates the over-fitting problem by replacing the unreliable small and zero eigenvalues with model values. It also enables discriminant evaluation in the whole space to extract low-dimensional features effectively. The proposed approach is evaluated and compared with 8 popular subspace-based methods on a face verification task. Experimental results on three face databases show that the proposed method consistently outperforms the others.

The Open Artificial Intelligence Journal. 01/2008; 2(1).
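The verification/identification distinction the abstract opens with can be made concrete: verification thresholds a similarity score between two feature vectors, while identification picks the nearest gallery template. Cosine similarity and Euclidean distance are illustrative metric choices here, not necessarily the paper's:

```python
import numpy as np

def verify(f1, f2, threshold):
    """Verification: decide whether two feature vectors belong to the
    same person by thresholding cosine similarity (illustrative metric)."""
    sim = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return sim >= threshold

def identify(probe, gallery):
    """Identification: return the index of the nearest gallery template
    under Euclidean distance (illustrative metric)."""
    d = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(d))
```

A method tuned to rank the correct identity first (identification) can still produce poorly calibrated similarity scores across impostor pairs, which is why subspace methods that over-fit the training data degrade more visibly on the verification task.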
##### Conference Paper: Dimensionality reduction in subspace face recognition


**ABSTRACT:** Numerous face recognition algorithms use principal component analysis (PCA) as a first step for dimensionality reduction (DR), followed by linear discriminant analysis (LDA). PCA is applied first because it performs DR in the minimum-square-error sense and achieves the most compact representation of the data; however, PCA features lack discrimination ability. To optimize classification, LDA and its variants are applied in the PCA-reduced subspace so that the transformed data achieve minimum within-class variation and maximum between-class variation. In this paper, we study the total, within-class and between-class scatter matrices and their roles in DR and in feature extraction with good discrimination ability. The number of dimensions retained in DR plays a crucial role in the subsequent discriminant analysis. We reveal some important aspects of how the recognition rate varies with different scatter matrices and their stepwise DR. Experimental results on popular face databases are provided to support our findings.

Information, Communications & Signal Processing, 2007 6th International Conference on; 01/2008
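The standard PCA→LDA pipeline the abstract describes can be sketched as below: PCA compacts the data (and sidesteps the singular within-class scatter of the raw image space), then LDA finds discriminative directions in the reduced subspace. The dimensions `n_pca` and `n_lda` are illustrative parameters whose choice is exactly the crucial question the paper studies:

```python
import numpy as np

def pca_lda(X, y, n_pca, n_lda):
    """Sketch of the common PCA -> LDA pipeline.
    X: (n_samples, d) data matrix, y: class labels.
    Returns a combined (d, n_lda) projection and the data mean."""
    # --- PCA on mean-centred data ---
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pca].T                      # (d, n_pca) PCA projection
    Z = Xc @ P
    # --- LDA in the PCA subspace: within/between-class scatter ---
    Sw = np.zeros((n_pca, n_pca))
    Sb = np.zeros((n_pca, n_pca))
    gm = Z.mean(axis=0)
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc - gm, mc - gm)
    # Solve Sb w = lam Sw w via pinv(Sw) @ Sb, keep top eigenvectors
    lam, Wv = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-lam.real)
    L = Wv[:, order[:n_lda]].real         # (n_pca, n_lda) LDA projection
    return P @ L, mu
```

Note that retaining too many PCA dimensions keeps noise directions with near-zero within-class eigenvalues, which destabilizes the `pinv(Sw)` step — this sensitivity to the retained dimensionality is the effect the paper examines.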
