Real-time iris detection on faces with coronal axis rotation.
ABSTRACT Real-time face and iris detection in video sequences is important in diverse applications such as study of eye function, drowsiness detection, virtual keyboard interfaces, face recognition, and multimedia retrieval. In previous work we developed a non-invasive, real-time iris detection method consisting of three stages: coarse face detection, fine face detection, and iris boundary detection. In this paper, iris detection is considered on faces rotated about the coronal axis within the range -40° to 40°. It is shown that a line integral over the directional image, taken as a function of the template rotation angle, has a maximum when the face and the template coincide in rotation angle. The method was applied to 10 video sequences, with a total of 6470 frames, from different subjects rotating their faces about the coronal axis. Correct face detection was 100% on 8 of the video sequences, 99.9% on one, and 98.2% on one. Correct iris detection was above 96% on 9 of the video sequences and 77.8% on one. The method runs in real time (30 frames per second) on a 1.8 GHz PC.
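The core idea of the abstract, finding the face rotation angle as the maximum of a line integral over the directional image, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gradient-magnitude map standing in for the directional image, the elliptical template, and all function names (`directional_image`, `template_points`, `line_integral`, `best_rotation`) are assumptions made for the sketch.

```python
import numpy as np

def directional_image(img):
    """Gradient-magnitude map used as a simplified stand-in for the
    paper's directional image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def template_points(cx, cy, a, b, angle_deg, n=180):
    """Pixel coordinates of an elliptical face template centered at
    (cx, cy), semi-axes (a, b), rotated by angle_deg."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ang = np.radians(angle_deg)
    x, y = a * np.cos(t), b * np.sin(t)
    xr = cx + x * np.cos(ang) - y * np.sin(ang)
    yr = cy + x * np.sin(ang) + y * np.cos(ang)
    return np.round(xr).astype(int), np.round(yr).astype(int)

def line_integral(dimg, cx, cy, a, b, angle_deg):
    """Sum of directional-image values sampled along the rotated template
    (a discrete line integral)."""
    xs, ys = template_points(cx, cy, a, b, angle_deg)
    h, w = dimg.shape
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return float(dimg[ys[ok], xs[ok]].sum())

def best_rotation(dimg, cx, cy, a, b, angles):
    """Rotation angle maximizing the line integral, i.e. the angle at
    which face and template coincide."""
    scores = [line_integral(dimg, cx, cy, a, b, ang) for ang in angles]
    return angles[int(np.argmax(scores))]
```

For example, scanning `angles = range(-40, 41, 5)` over a frame whose face edges trace a 20°-rotated ellipse should return 20, since the template overlaps the strongest directional responses only at the matching angle.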
- Available from: Linh Manh Pham
ABSTRACT: Real-time eye and iris tracking is important for hands-off gaze-based password entry, instrument control by paraplegic patients, Internet user studies, and homeland security applications. In this project, a smart camera, LabVIEW, and vision software tools are used to develop eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame; eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested under different conditions, including different face angles, head motion speeds, and eye occlusions, to determine their usability for the proposed applications. This paper presents the implemented algorithms and their performance on the smart camera. 01/2011
- ABSTRACT: In this article we report a new method for gender classification from frontal face images using feature selection based on mutual information and fusion of features extracted from intensity, shape, texture, and three different spatial scales. We compare three mutual information measures: minimum redundancy maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), and conditional mutual information feature selection (CMIFS). We also show that fusing features extracted by six different methods significantly improves gender classification relative to previously published results, yielding a 99.13% classification rate on the FERET database. International Journal of Optomechatronics 01/2012; 6(1):92-119. · 0.43 Impact Factor
- ABSTRACT: Real-time iris and face detection in video sequences is important in applications such as study of eye function, drowsiness detection, man-machine interfaces, face recognition security, and multimedia retrieval. In this work, we present a real-time, template-based method for iris detection in faces with wide coronal (-40°, +40°) and transversal (-45°, +45°) axis rotations. The method is based on anthropometric templates, constructed off-line for coronal and transversal face rotation, using face features such as the elliptical face shape and the locations of the eyebrows, nose, and lips. A line integral is computed with these templates over the fine directional image to find the face location, size, and rotation angle. This information defines a region in which to search for the eyes, and the iris boundary is then detected. Results computed on five video sequences including coronal and transversal rotations, with over 1,700 frames, show a correct face detection rate of 98.5% and an iris detection rate of 94.4%. The method was compared with a "weighting mask method" on two video sequences and showed improved performance. It was also compared for eye detection with a method using combined binary edge and intensity information on two subsets of the AR face database (63 and 564 images). Different disparity errors were considered; for the smallest error, 100% correct detection was reached on the AR-63 subset and 99.8% on the AR-564 subset. International Journal of Optomechatronics 01/2009; 3(1):54-67. · 0.43 Impact Factor