Source publication
In this paper, we propose the EREGE system, a face analysis package that includes face detection, eye detection, eye tracking, emotion recognition, and gaze estimation. EREGE consists of two parts: facial emotion recognition, which recognizes seven emotions, namely neutral, happiness, sadness, anger, disgust, fear, and surprise...
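The excerpt does not detail how EREGE's detection stages are implemented; purely as an illustration of the first two components (face and eye detection), a minimal sketch using OpenCV's bundled Haar cascades (an assumption, not necessarily the authors' detector) could look like this:

```python
# Minimal sketch of a face + eye detection front end, assuming OpenCV's
# stock Haar cascades; the actual EREGE detectors may differ.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(frame_bgr):
    """Return the first detected face box and the eye boxes inside it."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, []
    x, y, w, h = faces[0]
    face_roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
    # Eye boxes are returned relative to the full frame.
    return (x, y, w, h), [(x + ex, y + ey, ew, eh) for ex, ey, ew, eh in eyes]
```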
Context in source publication
Similar publications
Direct classification of normalized and flattened 3D facial landmarks reconstructed from 2D images is proposed in this paper for recognizing eight types of facial expressions depicting the emotions of sadness, anger, contempt, disgust, fear, happiness, neutral, and surprised. The first stage is the 3D projection of 2D facial landmarks. The pre-trai...
Citations
... This method usually requires the VJD detector to find the head and delimit the region containing the face. As a first approach, Anwar et al. (2018) utilized the active shape model (ASM) (Cootes and Taylor 1992), which derives from the point distribution model (PDM), a method that defines a non-rigid object's contour from a collection of images with annotated points of interest. Scaling the principal components of the deformable model makes it possible to adapt the model to the border of the object in a new image. ...
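As a rough sketch of the point distribution model behind the ASM (illustrative code, with variable names that are assumptions rather than taken from Cootes and Taylor 1992): a shape is the mean landmark configuration plus a weighted sum of the principal modes of variation learned from aligned training shapes.

```python
# Sketch of a point distribution model (PDM): learn principal modes of shape
# variation from aligned landmark vectors, then instantiate new shapes by
# scaling those modes. Illustrative only.
import numpy as np

def fit_pdm(training_shapes, n_modes=5):
    """training_shapes: (N, 2K) array of aligned landmark vectors."""
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    # Principal components of the landmark covariance via SVD.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                                    # (n_modes, 2K)
    eigvals = (singular_values[:n_modes] ** 2) / (len(training_shapes) - 1)
    return mean_shape, modes, eigvals

def generate_shape(mean_shape, modes, b):
    """Instantiate a shape x = x_mean + P^T b for mode weights b."""
    return mean_shape + modes.T @ b
```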
... Applying the eye detection algorithms frame by frame may require significant computational resources. To reduce this cost, some authors proposed the use of well-known tracking techniques such as Lucas-Kanade (LK) (Ferhat et al. 2015; Anwar et al. 2018). ...
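A minimal sketch of this idea, assuming OpenCV's pyramidal Lucas-Kanade implementation and illustrative parameter values: points detected in one frame are propagated to the next, and the full detector only runs again when tracking fails.

```python
# Track previously detected eye/iris points between frames with pyramidal
# Lucas-Kanade optical flow instead of re-running the detector every frame.
import cv2

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_points(prev_gray, gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 points detected in the previous frame."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, **lk_params)
    good = status.ravel() == 1
    # Re-run the eye detector if too few points survive tracking.
    return next_pts[good], good
```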
... Most authors use grayscale images in the processing pipeline. The iris center position can be obtained by searching for the center of mass (CoM) of the darkest pixels in the eye area (Rondio et al. 2012; Anwar et al. 2018). Bilateral filters (Zheng and Usagawa 2018), which preserve edges, can be applied to filter noisy points in low-quality images. ...
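The following sketch illustrates that CoM idea under stated assumptions (the darkness threshold and bilateral filter parameters are illustrative, not values from the cited works):

```python
# Estimate the iris centre as the centre of mass of the darkest pixels in the
# eye region, after bilateral filtering to suppress noise while keeping edges.
import cv2
import numpy as np

def iris_center(eye_gray, dark_fraction=0.05):
    smoothed = cv2.bilateralFilter(eye_gray, d=7, sigmaColor=40, sigmaSpace=7)
    # Keep the darkest fraction of pixels as the candidate iris region.
    threshold = np.quantile(smoothed, dark_fraction)
    mask = (smoothed <= threshold).astype(np.uint8)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in eye ROI coords
```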
This paper is the first of a two-part study aiming at building a low-cost visible-light eye tracker (ET) for people with amyotrophic lateral sclerosis (ALS). The whole study comprises several phases: (1) analysis of the scientific literature, (2) selection of the studies that best fit the main goal, (3) building the ET, and (4) testing with final users. This document covers the first two phases, in which more than 500 studies from different scientific databases (IEEE Xplore, Scopus, SpringerLink, etc.) fulfilled the inclusion criteria and were analyzed following the guidelines of a scoping review. Two researchers screened the search results and selected 44 studies (κ = 0.86, kappa statistic). Three main methods (appearance-, feature-, or model-based) were identified for visible-light ETs, but none significantly outperformed the others according to the reported accuracy (p = 0.14, Kruskal–Wallis test (KW)). The feature-based method is abundant in the literature, although the number of appearance-based studies is increasing due to the use of deep learning techniques. Head movements worsen the accuracy of ETs, and only a very small number of studies considered the use of algorithms to correct the head pose. Even though head movements seem not to be a big issue for people with ALS, some slight head movements might be enough to worsen the ET accuracy. For this reason, only studies that did not constrain the head movements with a chinrest were considered. Five studies fulfilled the selection criteria with accuracies less than , and one of them is illuminance invariant.
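As a rough illustration of the two statistics quoted in this abstract (inter-rater agreement between the two screeners and the Kruskal–Wallis comparison of accuracies across the three method families), a minimal sketch with placeholder data, not the study's actual values, might be:

```python
# Placeholder data only: illustrates how the kappa agreement and the
# Kruskal-Wallis test mentioned in the abstract are typically computed.
from scipy.stats import kruskal
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1]          # include/exclude decisions, researcher A
rater_b = [1, 1, 0, 1, 1, 1]          # include/exclude decisions, researcher B
kappa = cohen_kappa_score(rater_a, rater_b)

acc_appearance = [2.1, 3.0, 2.7]      # reported accuracies (degrees) per method
acc_feature = [1.8, 2.4, 2.9, 3.5]
acc_model = [2.0, 2.6]
stat, p_value = kruskal(acc_appearance, acc_feature, acc_model)
```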