Fig 2 - uploaded by Mariofanna Milanova
Context in source publication
... system achieved a correctness score of 93% [1]. The eye gaze estimation tests were conducted on a group of 5 people. The gaze-direction accuracy test consists of measuring the difference between a reference line direction on the screen and the gaze direction of the examined person as determined by the program, as shown in Fig. 2. To this end, the examined person was asked to visually follow a point moving between predetermined locations ...
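The excerpt does not give the exact error metric, but a natural choice for the difference between the reference direction and the estimated gaze direction is the angle between the two direction vectors. A minimal sketch (the function name and 2-D vector representation are assumptions, not from the paper):

```python
import math

def angular_error_deg(target_dir, gaze_dir):
    """Angle in degrees between the on-screen reference direction and the
    gaze direction reported by the estimator (both as 2-D vectors)."""
    dot = target_dir[0] * gaze_dir[0] + target_dir[1] * gaze_dir[1]
    n1 = math.hypot(target_dir[0], target_dir[1])
    n2 = math.hypot(gaze_dir[0], gaze_dir[1])
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_a))
```

For example, a gaze vector perpendicular to the reference line yields an error of 90 degrees, and a perfectly aligned one yields 0.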
Citations
... The algorithm presented in [9] is used to estimate eye gaze. The proposed algorithm consists of the following steps: ...
... The assumption that both points are at the same distance means that the object is far enough from the camera that the depth difference can be omitted. Using the position calculated in the previous iteration, the points undergo a 3D projection of the object [9]. ...
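The "same distance" assumption described above corresponds to a weak-perspective (scaled orthographic) camera model, where every model point is projected with a single scale factor derived from one shared reference depth. A hedged sketch (function name, parameters, and the exact projection setup are illustrative assumptions):

```python
import numpy as np

def weak_perspective_project(points_3d, R, t, focal, depth_ref):
    """Project 3-D model points assuming all points share one reference
    depth: per-point depth is dropped and a single scale is applied."""
    points_3d = np.asarray(points_3d, dtype=float)
    cam = points_3d @ R.T + t          # rotate/translate into camera frame
    s = focal / depth_ref              # one scale factor for every point
    return s * cam[:, :2]              # ignore per-point depth differences
```

Because the scale is shared, the depth difference between individual points never enters the projection, which is exactly why the object must be far enough from the camera for this to be accurate.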
... It requires establishing correct criteria, dependent on the method being used, that specify when the feature being looked for occurs. The criterion values are most often exposed as system parameters, which must then be set manually [9]. To determine the direction in which the eyes are pointing, the center of the pupil must be located precisely. ...
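One common criterion of this kind is a darkness threshold for pupil pixels, with the pupil center taken as the centroid of the pixels that satisfy it. A minimal sketch under that assumption (the threshold value and function name are illustrative; the cited work may use a different criterion):

```python
import numpy as np

def pupil_center(gray, dark_thresh=40):
    """Estimate the pupil center as the centroid of pixels darker than a
    manually tuned threshold -- the 'criterion value' set as a parameter."""
    ys, xs = np.nonzero(gray < dark_thresh)
    if xs.size == 0:
        return None                    # criterion met nowhere in the image
    return float(xs.mean()), float(ys.mean())
```

The returned (x, y) centroid is only as precise as the threshold is well chosen, which is why such parameters typically need manual tuning per setup.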
In this paper, we propose the EREGE system, a face analysis package comprising face detection, eye detection, eye tracking, emotion recognition, and gaze estimation. EREGE consists of two parts. The first part is facial emotion recognition, which recognizes seven emotions: neutral, happiness, sadness, anger, disgust, fear, and surprise. In this part, we implemented an Active Shape Model (ASM) tracker that tracks 116 facial landmarks from webcam input. The tracked landmark points are used to extract facial expression features, and a Support Vector Machine (SVM) classifier makes the system robust in recognizing the seven emotions. The second part of EREGE is eye gaze estimation, which starts by creating a head model and then applies both the Active Shape Model (ASM) and Pose from Orthography and Scaling with Iterations (POSIT) algorithms for head tracking and pose estimation.
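The abstract does not specify how the 116 tracked landmarks are turned into expression features before the SVM stage. A plausible, hedged sketch is to normalize the landmark coordinates for head position and face scale and flatten them into a feature vector (the normalization scheme below is an illustrative assumption, not the paper's method):

```python
import numpy as np

def expression_features(landmarks):
    """Turn tracked (x, y) landmarks into a translation/scale-invariant
    feature vector suitable as SVM input. Illustrative only; the paper
    does not list its exact feature extraction."""
    pts = np.asarray(landmarks, dtype=float)        # shape (116, 2)
    centered = pts - pts.mean(axis=0)               # remove head position
    scale = np.linalg.norm(centered, axis=1).mean() # mean distance to center
    return (centered / scale).ravel()               # 232-dim feature vector
```

Normalizing this way means the classifier sees the same vector whether the face is near or far from the webcam, which matters for a live-tracking pipeline like the one described.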