Conference Paper

Real-time iris detection on faces with coronal axis rotation.

Dept. of Electr. Eng., Chile Univ., Santiago, Chile
DOI: 10.1109/ICSMC.2004.1401404 Conference: Proceedings of the IEEE International Conference on Systems, Man & Cybernetics: The Hague, Netherlands, 10-13 October 2004
Source: DBLP

ABSTRACT Real-time face and iris detection in video sequences is important in diverse applications such as study of eye function, drowsiness detection, virtual keyboard interfaces, face recognition, and multimedia retrieval. In previous work we developed a non-invasive, real-time iris detection method consisting of three stages: coarse face detection, fine face detection, and iris boundary detection. In this paper, iris detection is considered on faces rotated about the coronal axis within the range -40° to 40°. It is shown that a line integral over the directional image, taken as a function of the template rotation, reaches its maximum when the face and the template coincide in rotation angle. The method was applied to 10 video sequences, totaling 6470 frames, of different subjects rotating their faces about the coronal axis. Correct face detection reached 100% on 8 of the video sequences, 99.9% on one, and 98.2% on the remaining one. Correct iris detection was above 96% in 9 of the video sequences and 77.8% in one. The method runs in real time (30 frames per second) on a 1.8 GHz PC.
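The rotation-search idea in the abstract can be sketched as follows: a template is rotated over candidate angles in the ±40° coronal range, a line integral through the image is evaluated at each angle, and the maximizing angle is taken as the face rotation. This is an illustrative toy (a bright synthetic line stands in for the directional image; `line_integral` and `best_rotation` are hypothetical names, not the authors' implementation):

```python
import numpy as np

def line_integral(img, angle_deg, n=100):
    """Sum image values sampled along a line through the center,
    rotated by angle_deg (nearest-neighbour sampling)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    r = min(h, w) / 2.0 - 1
    t = np.linspace(-r, r, n)  # parametrize the line from -r to +r
    ys = np.clip(np.round(cy + t * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + t * np.cos(theta)).astype(int), 0, w - 1)
    return img[ys, xs].sum()

def best_rotation(img, angles):
    """Return the candidate angle that maximizes the line integral."""
    scores = [line_integral(img, a) for a in angles]
    return angles[int(np.argmax(scores))]

# synthetic test image: a bright line at 20 degrees through the center
h = w = 101
img = np.zeros((h, w))
cy = cx = 50
theta = np.deg2rad(20)
for t in np.linspace(-45, 45, 400):
    img[int(round(cy + t * np.sin(theta))),
        int(round(cx + t * np.cos(theta)))] = 1.0

angles = np.arange(-40, 41, 5)  # the paper's ±40° coronal range
print(best_rotation(img, angles))  # recovers the 20° rotation
```

The integral is largest when the sampled line overlaps the bright structure, mirroring the paper's claim that the line integral peaks when template and face rotation coincide.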

  •
    ABSTRACT: Real-time eye and iris tracking is important for hands-off gaze-based password entry, instrument control by paraplegic patients, Internet user studies, and homeland security applications. In this project, a smart camera, LabVIEW, and vision software tools are used to develop eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested under different conditions, including different angles of the face, head motion speed, and eye occlusions, to determine their usability for the proposed applications. This paper presents the implemented algorithms and their performance results on the smart camera.
    01/2011; DOI:10.1109/AIPR.2011.6176373
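The frame-to-frame correlation step described in that abstract can be illustrated with a brute-force normalized cross-correlation search. The function name and the synthetic "eye" patch are assumptions for this sketch, not the paper's smart-camera code:

```python
import numpy as np

def ncc_match(frame, template):
    """Locate template in frame by normalized cross-correlation
    (brute force; returns the top-left corner of the best match)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            p = frame[y:y + th, x:x + tw]
            pz = p - p.mean()
            denom = np.sqrt((pz * pz).sum()) * tn
            if denom == 0:
                continue  # skip flat windows
            score = (pz * t).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# a synthetic "eye" patch moving between two frames
rng = np.random.default_rng(0)
eye = rng.random((5, 5))
f1 = np.zeros((30, 30)); f1[10:15, 12:17] = eye
f2 = np.zeros((30, 30)); f2[11:16, 14:19] = eye
tmpl = f1[10:15, 12:17]           # feature detected in frame 1
print(ncc_match(f2, tmpl))        # tracked to its new position in frame 2
```

In practice a real tracker would restrict the search to a window around the previous position to stay real-time, rather than scanning the whole frame.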
  •
    ABSTRACT: In this article we report a new method for gender classification from frontal face images using feature selection based on mutual information and fusion of features extracted from intensity, shape, and texture, and from three different spatial scales. We compare the results of three different mutual information measures: minimum redundancy maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), and conditional mutual information feature selection (CMIFS). We also show that fusing features extracted by six different methods significantly improves gender classification relative to previously published results, yielding a 99.13% gender classification rate on the FERET database.
    International Journal of Optomechatronics 01/2012; 6(1):92-119. DOI:10.1080/15599612.2012.663463 · 0.57 Impact Factor
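A minimal sketch of mutual-information-based feature selection in the mRMR style used above (discrete features only; `mutual_info` and `mrmr_select` are illustrative names, not the authors' implementation):

```python
import numpy as np

def mutual_info(x, y):
    """Mutual information (in nats) between two discrete variables,
    estimated from empirical joint and marginal frequencies."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def mrmr_select(X, y, k):
    """Greedy mRMR: pick features maximizing relevance I(f; y)
    minus mean redundancy with already-selected features."""
    n_feat = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_feat)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            if relevance[j] - red > best_score:
                best_j, best_score = j, relevance[j] - red
        selected.append(best_j)
    return selected

# toy data: feature 0 is a slightly noisy copy of the label,
# feature 1 duplicates feature 0 (redundant), feature 2 is weakly informative
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
flip = lambda v, p: v ^ (rng.random(len(v)) < p)  # flip bits with prob. p
f0 = flip(y, 0.05)
X = np.column_stack([f0, f0.copy(), flip(y, 0.3)])
print(mrmr_select(X, y, 2))  # skips the redundant duplicate feature
```

The key behaviour is that the second pick avoids feature 1 despite its high relevance, because its redundancy with the already-selected feature 0 cancels it out.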
  •
    ABSTRACT: In this paper we report our extension of feature selection based on mutual information and feature fusion to improve gender classification of face images. We compare the results of fusing three groups of features, three spatial scales, and four different mutual information measures for selecting features. We also show improved results from fusing LBP features with different radii and spatial scales, together with feature selection using mutual information. As mutual information measures we use minimum redundancy maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), conditional mutual information feature selection (CMIFS), and conditional mutual information maximization (CMIM). We tested the results on four databases: FERET and UND under controlled conditions, LFW under unconstrained scenarios, and AR for occlusions. Feature selection together with fusion of LBP features significantly improves gender classification accuracy compared to previously published results. We also show a significant reduction in processing time due to feature selection, which makes real-time gender classification feasible.
    IEEE Transactions on Information Forensics and Security 03/2013; 8(3):488-499. DOI:10.1109/TIFS.2013.2242063 · 2.07 Impact Factor
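The LBP features mentioned in that abstract can be illustrated with a basic 3×3 local binary pattern and its histogram feature vector. This is a simplified sketch: the paper uses LBP variants with different radii and spatial scales, which this single-radius toy does not cover.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each pixel is encoded by
    comparing its 8 neighbours against the center value."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the actual feature vector
    that would be fed to feature selection and fusion."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

# toy example: on a flat patch every neighbour >= center,
# so all 8 bits are set and every code is 255
flat = np.ones((5, 5))
print(np.unique(lbp_image(flat)))
```

In a real pipeline the face image would be divided into regions, one histogram computed per region and radius, and the concatenated histograms passed to the mutual-information feature selector.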