Figure 1 - uploaded by Tamás Szirányi
The defined geometry of the face model.

Facial features could also be found by calculating the vertical/horizontal gradients of the input image [16]. However, we decided to design an algorithm with binary-output operations, because they are more robust on the ACE4K [14]. Consequently, in our facial feature extraction system the input is the edge map of a face image. The procedure for locating the facial features using the geometric face model defined above is as follows. First, a Sobel operation is applied to the face image. After thresholding, the face symmetry axis is detected and then, based on the defined face model, the nose is ...
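A minimal NumPy/SciPy sketch of the described pipeline (Sobel edge detection, thresholding, symmetry-axis search, nose localization) is given below. It is an illustrative reconstruction, not the ACE4K on-chip implementation; the threshold, search range, and band width are assumed values.

```python
# Illustrative sketch of: Sobel -> threshold -> symmetry axis -> nose band.
# Not the ACE4K implementation; thresholds and ranges are assumptions.
import numpy as np
from scipy import ndimage

def edge_map(gray, thresh=60):
    """Binary edge map via Sobel gradients and a fixed threshold."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def symmetry_axis(edges):
    """Column whose mirrored halves of the edge map agree best."""
    h, w = edges.shape
    best_col, best_err = w // 2, np.inf
    for c in range(w // 4, 3 * w // 4):          # search around the centre
        half = min(c, w - c)
        left = edges[:, c - half:c]
        right = np.fliplr(edges[:, c:c + half])
        err = np.count_nonzero(left != right) / left.size
        if err < best_err:
            best_col, best_err = c, err
    return best_col

def nose_row(edges, axis_col, band=5):
    """Row of maximal edge density in a narrow band around the symmetry
    axis (the nose as the most vertically detailed region)."""
    band_cols = edges[:, max(axis_col - band, 0):axis_col + band]
    return int(np.argmax(band_cols.sum(axis=1)))
```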

Source publication
Article
Full-text available
Novel on-chip algorithms are proposed for face analysis on pictures obtained from a multi-camera surveillance system. In line with the objective of the face detection sub-system, spatial positional relations are to be presented; the methodology we used for this is detailed in the paper. The uniqueness of the approach is that the methodology we applied provi...

Similar publications

Article
Full-text available
This paper presents the design of a novel Vedic multiplier using techniques of ancient Indian Vedic mathematics that have been modified to improve performance. A high-speed processor depends greatly on the multiplier, as it is one of the key hardware blocks in most digital signal processing systems as well as in general processors. Currently t...
Article
Full-text available
The increasing complexity of VLSI digital systems has dramatically supported system-level representations in modeling and design activities. This evolution often makes necessary a compliant rearrangement of the modalities followed in validation and analysis tasks, as in the case of power performance estimation. Nowadays, transaction-level...

Citations

... The components are either assumed to be the holes in the detected facial regions, features computed in given color spaces, or the darkest region of the face. A geometrical technique is used by Zoltan and Tamas [8] to detect and extract facial components. They compute the facial symmetry axis, and use it to deduce the nose region based on the assumption that the region of the nose is the most vertically detailed region on a face. ...
Conference Paper
Full-text available
Component-based automatic face recognition has been of interest to a growing number of researchers in the past fifteen years. In this paper, we present an approach for detecting the eyes, nose and mouth in a gray scale image for face recognition. The image is first binarized, and the connected components of the resulting image are detected and labelled. An iterative strategy is used to remove the irrelevant components. The iteration stops when the remaining components are most probably the targeted components: eyes, nose and mouth. Our approach has the advantage that it is straightforward and fast, and there is no manual interaction in choosing and extracting face components. Experiments show that our approach provides promising results, as it performs automatically without any assumption about the location of the face components, as well as for different orientations of the face.
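The abstract above outlines a binarize-label-prune pipeline. The sketch below, using scipy.ndimage, shows one plausible reading of it; the mean-threshold binarization and the smallest-area pruning rule are our assumptions, since the paper's actual relevance criteria are not given here.

```python
# One plausible reading of the binarize -> label -> iteratively prune
# strategy. Binarization rule and pruning criterion are assumptions.
import numpy as np
from scipy import ndimage

def candidate_centroids(gray, target=4):
    binary = gray < gray.mean()          # dark blobs: eyes, nostrils, mouth
    labels, n = ndimage.label(binary)
    if n == 0:
        return []
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))
    comps = list(range(1, n + 1))
    # iteratively discard the least plausible (here: smallest) component
    while len(comps) > target:
        comps.remove(min(comps, key=lambda c: areas[c - 1]))
    return ndimage.center_of_mass(binary, labels, comps)
```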
... The components are either assumed to be the holes in the detected facial regions, features computed in given color spaces, or the darkest region of the face. A geometrical technique is used by Zoltan and Tamas [102] to detect and extract facial components. They compute the facial symmetry axis and use it to deduce the nose region based on the assumption that the region of the nose is the most vertically detailed region on a face. ...
... Instead of using a geometrical estimation or an assumption about the location of face components as done in previous works [12, 91, 102, 97, 84], our approach exploits the pixel coordinates of each detected component to determine its location in the face. ...
... The components are either assumed to be the holes in the detected facial regions, features computed in given color spaces, or the darkest region of the face. A geometrical technique is used by Zoltan and Tamas [9] to detect and extract facial components. They compute the facial symmetry axis and use it to deduce the nose region based on the assumption that the region of the nose is the most vertically detailed region on a face. ...
... The inputs to our validation model are the centroids of the detected facial components. Let us consider the finite set of points S_n = {P_i, i = 1, ..., n} (9), where n is the number of detected components and P_i is the centroid of the i-th detected component. The convex hull of S_n is defined as the smallest 2D polygon Ω that contains S_n [20]. ...
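The hull computation in equation (9) can be sketched directly with SciPy; the toy centroid coordinates below are hypothetical, and scipy.spatial.ConvexHull stands in for whichever hull routine the authors used.

```python
# Sketch of the validation input defined in (9): the convex hull of
# the detected component centroids. Coordinates are toy values.
import numpy as np
from scipy.spatial import ConvexHull

centroids = np.array([[30, 40], [70, 40], [50, 60], [50, 85]])  # S_n (toy)
hull = ConvexHull(centroids)          # smallest 2D polygon containing S_n
polygon = centroids[hull.vertices]    # vertices in counter-clockwise order
```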
Article
Full-text available
In the face recognition research domain, feature-based approaches have been widely used in many works during recent years. However, only few of these have used a validation step to assess whether the detected facial components were those appropriate for the recognition. In this paper, we present an approach for detecting and validating facial components from gray scale images. We first binarize the image. Thereafter, the connected components of the resulting image are detected and labelled. An iterative strategy is applied to remove the irrelevant components. The iteration terminates when the remaining components are the targeted components: eyes, nose and mouth. Afterward, we compute the centroids of the detected components. The convex hull of these centroids is computed and the validity of the detected components is further assessed by applying k-means on the features extracted from the angles at the two lowest points of the convex hull. Our approach has the advantage that it is straightforward and fast, and there is no manual interaction in choosing and extracting face components. Experiments show that our approach provides promising results, as it performs automatically without any assumption about the location of face components, as well as for different orientations of the face. Furthermore, our work is a great contribution to the feature-based face recognition research domain, as early detection of a wrongly detected set of facial components could considerably increase the efficiency and the speed of the recognition.
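The validation step (k-means on the angles at the two lowest hull points) might look as follows; the angle extraction, k = 2, and the toy hull coordinates are our assumptions, not the paper's exact formulation.

```python
# Hedged sketch: interior angles at the two lowest hull vertices as
# k-means features (k=2 assumed: valid vs. invalid component sets).
import numpy as np
from sklearn.cluster import KMeans

def lowest_point_angles(polygon):
    """Interior angles (radians) at the two hull vertices with the
    largest row coordinate, i.e. the lowest points on the face."""
    idx_low = np.argsort(polygon[:, 1])[-2:]     # image rows grow downwards
    angles = []
    for i in idx_low:
        p = polygon[i]
        a = polygon[(i - 1) % len(polygon)] - p
        b = polygon[(i + 1) % len(polygon)] - p
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angles

# toy hulls standing in for one plausible and one implausible detection
hulls = [np.array([[30, 40], [70, 40], [55, 85], [45, 85]]),
         np.array([[10, 20], [90, 25], [80, 95], [15, 50]])]
features = np.array([lowest_point_angles(h) for h in hulls])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```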
... Hsu's research [5] extracted face features frontally to get the eyes and the mouth parts, and the triangle between them. Szlávik [6] realized that the eyes lie at proportional distances from other face features. If one feature is detected, the positions of the other features can be obtained and the features extracted. ...
... 3) Based on former research [6], [8], nose extraction is conducted after the distance between the left and right eyes' midpoints is determined. Nose height and width are determined as shown in Figure 7. ...
Article
Biometric technology has been frequently utilized by researchers in identifying and recognizing human features. This technology identifies humans' unique and static body parts, such as fingerprints, eyes, and face. The identification and recognition of a human face rely on processing and analyzing face features. This consists of determining the face components' regions and their characteristics, which establishes the role of each individual component in face recognition. This research develops a system that separates face features into face components and extracts the eyes, nose, mouth, and face boundary. This process is conducted on a single frontal still image. Distances between components are measured and then combined with other features to construct face semantics. Distances between features are determined by going through the process of face detection based on skin color, cropping to normalize the face region, and extraction of the eye, nose, and mouth features. A uniqueness test on 150 samples shows that a minimum of five face feature distances is needed to establish the uniqueness of the face feature distances. This research shows that the determination of face features and face component distances can be used to identify a face as a subsystem of a face recognition system.
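A minimal sketch of such a distance-based face signature follows: pairwise Euclidean distances between hypothetical feature centroids. Which five distances the study actually selects is not stated here, so all pairs are computed.

```python
# Pairwise feature-distance signature (all coordinates hypothetical).
import numpy as np
from itertools import combinations

features = {
    "left_eye": (30, 40), "right_eye": (70, 40),
    "nose": (50, 60), "mouth": (50, 85), "chin": (50, 110),
}
signature = {
    (a, b): float(np.hypot(*np.subtract(features[a], features[b])))
    for a, b in combinations(features, 2)
}
# five or more of these distances form the face's semantic signature
```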
Chapter
MR is the most sensitive clinical tool in the diagnosis and monitoring of multiple sclerosis (MS) alterations. Spinal cord (SC) evaluation has gained interest in this clinical scenario in the last 10 years, but unlike for the brain, there is a lack of algorithms assisting SC segmentation. Our goal was to investigate and develop an automatic MR cervical SC segmentation method that would enable seamless extraction of imaging biomarkers related to SC atrophy and lesion infiltration. This algorithm was developed using a dataset based on real-world MR data of 121 MS patients. 96 cases were used as training data and the remaining 25 cases were retained as testing data. The MR sequences used consisted of 3D-T1 gradient echo MR axial images, acquired in a 3T system (SignaHD-USA) (TE/TR/FA: 1.7–2.7 ms/5.6–8.2 ms/12°). Manual ground-truth labeling was performed under radiologist supervision. The architecture of the 2D convolutional neural network consisted of a hybrid residual attention-aware segmentation method trained to extract the region of interest. The training was designed with a focal loss function based on the Tversky index, to address the issue of label imbalance in medical image segmentation, and an automatic optimal learning rate finder. Our model provided an automated and accurate method, achieving a Dice coefficient of 0.87. An automatic method for SC segmentation from MR was successfully implemented. It will have direct implications for accelerating the process of MS diagnosis, follow-up and extraction of imaging biomarkers.
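The chapter mentions "a focal loss function based on the Tversky index"; the standard focal Tversky loss, sketched below in PyTorch, matches that description. The alpha/beta/gamma values are common defaults, not necessarily the chapter's settings.

```python
# A standard focal Tversky loss (one common reading of "a focal loss
# based on the Tversky index"); alpha/beta/gamma are usual defaults.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """pred, target: float tensors of probabilities/labels, same shape."""
    pred, target = pred.reshape(-1), target.reshape(-1)
    tp = (pred * target).sum()
    fn = ((1.0 - pred) * target).sum()
    fp = (pred * (1.0 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma  # gamma focuses training on hard cases
```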
Article
Privacy of personal resources and preventing their use by non-authorized parties has become a necessity of the internet society; it is therefore required of scientists to find methods to secure their use. This paper presents a model for securing privacy based on face image differences. The paper shows practically how digital statistical parameters can be calculated from a given face image and then used to distinguish people. Codes are written in the Vbasic and Matlab languages. The new model recognizes 99.4% of internet-collected face images, 98% of face images taken by a digital camera, and 99.25% of all face images. Results show that the face can be used to distinguish people with minor error and enhances security at suitable speed. Keywords: digital image, image enhancement, face image, matching face images.
Conference Paper
Full-text available
Passwords are no longer sufficient for protecting access to personal resources, because it is easy to forge them, in either small or big systems. Today's challenge is to find alternatives to passwords. Biometrics is one such way, by means of using human characteristics to control access to system resources. Fingerprint and face can be used to secure systems, because a fingerprint is unique and unrepeatable; combined with the face, it can be widely acceptable. This paper presents a model of fingerprint and face recognition that grants access to sensitive information only to authorized persons. The method is based on recognition of concatenated fingerprint and face images, known as fingerface. Small fingerface images are used to extract information. The area under the curve resulting from connecting the highest peaks is used as a matching parameter. Other parameters are used to ensure a perfect match. Our method takes into account a minimum image size and a minimum number of searches. As a result, we get a fast matching process.
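A hedged sketch of the fingerface matching feature follows: the two images are concatenated, a 1-D intensity profile is taken, and the area under the curve through its highest peaks is integrated. The profile choice (column means) and the peak count are our assumptions; the paper does not specify them here.

```python
# Illustrative fingerface feature (assumes both images share a height).
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

def fingerface_feature(fingerprint, face, n_peaks=10):
    combo = np.hstack([fingerprint, face])     # side-by-side "fingerface"
    profile = combo.mean(axis=0)               # 1-D intensity profile
    peaks, props = find_peaks(profile, height=0)
    top = np.sort(peaks[np.argsort(props["peak_heights"])[-n_peaks:]])
    # area under the piecewise-linear curve through the highest peaks
    return trapezoid(profile[top], top)
```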
Article
Full-text available
The use of thermal images of a selected area of the head in screening systems, which perform fast and accurate analysis of the temperature distribution of individual areas, requires the use of profiled image analysis methods. There exist methods for automated face analysis, used at airports or train stations, which are designed to detect people with fever. However, they do not enable automatic separation of specific areas of the face. This paper presents an algorithm for image analysis which enables the localization of characteristic areas of the face in thermograms. The algorithm is resistant to subject variability and also to changes in the position and orientation of the head. In addition, an attempt was made to eliminate the impact of the background and interference caused by hair and the hairline. The algorithm automatically adjusts its operating parameters to suit the prevailing room conditions. Compared to previous studies (Marzec et al., J Med Inform Tech 16:151–159, 2010), the set of thermal images was expanded by 34 images. As a result, the research material comprised a total of 125 patients' thermograms performed in the Department of Pediatrics and Child and Adolescent Neurology in Katowice, Poland. The images were taken interchangeably with several thermal cameras: AGEMA 590 PAL (sensitivity of 0.1 °C), ThermaCam S65 (sensitivity of 0.08 °C), A310 (sensitivity of 0.05 °C) and T335 (sensitivity of 0.05 °C), with a 320 × 240 pixel optical resolution of detectors, maintaining the principles related to taking thermal images for medical thermography. In comparison to Marzec et al. (J Med Inform Tech 16:151–159, 2010), the approach presented there has been extended and modified. A comparison with other methods presented in the literature demonstrated that this method is more comprehensive, as it enables determining the approximate areas of selected parts of the face, including anthropometry. As a result of this comparison, better results were obtained in terms of the localization accuracy of the center of the eye sockets and nostrils, giving an accuracy of 87 % for the eyes and 93 % for the nostrils.
Conference Paper
The following paper presents a new direction that opens for deformable grid-based object recognition methods due to the introduction of efficient, parallel implementations. A substantial increase in object recognition performance can be expected when several different features are used to build a class prototype. This implies extending the complexity of image analysis through the application of several image characteristics in image-model matching. To make such an approach computationally feasible, a CNN is considered as an ultra-fast tool for performing the grid-matching process. The sample task of face recognition, which is well suited to being tackled with deformable grids, is used to evaluate the performance of the proposed approach, yielding the expected increase in correct classification rate.