Article

Robust Facial Feature Tracking Using Shape-Constrained Multiresolution-Selected Linear Predictors

Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
IEEE Transactions on Pattern Analysis and Machine Intelligence (Impact Factor: 5.69). 10/2011; 33(9):1844-1859. DOI: 10.1109/TPAMI.2010.205
Source: IEEE Xplore

ABSTRACT: This paper proposes a learned, data-driven approach for accurate, real-time tracking of facial features using only intensity information. The task of automatic facial feature tracking is nontrivial, since the face is a highly deformable object with large textural variation and motion in certain regions. Existing work attempts to address these problems either by limiting itself to tracking feature points with strong and unique visual cues (e.g., mouth and eye corners) or by incorporating a priori information that must be designed manually (e.g., selecting points for a shape model). The framework proposed here largely avoids the need for such restrictions by automatically identifying the optimal visual support required for tracking a single facial feature point. This automatic identification of the visual context required for tracking allows the proposed method to potentially track any point on the face. Tracking is achieved via linear predictors (LPs), which provide a fast and effective mapping from pixel intensities to tracked-feature position displacements. Building upon the simplicity and strengths of linear predictors, a more robust biased linear predictor is introduced. Multiple linear predictors are then grouped into a rigid flock to further increase robustness. To improve tracking accuracy, a novel probabilistic selection method is used to identify relevant visual areas for tracking a feature point. The selected flocks are then combined into a hierarchical multiresolution LP model. Finally, we also exploit a simple shape constraint for correcting the occasional tracking failure of a minority of feature points. Experimental results show that this method performs more robustly and accurately than Active Appearance Models (AAMs), with minimal training examples, on sequences ranging from SD to YouTube quality. An analysis of the consistency of the visual support across different subjects is also provided.
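As a rough illustration of the abstract's core mechanism, the sketch below shows a single linear predictor: a matrix H, learned by least squares from synthetically displaced training samples, maps intensity differences at a set of support pixels directly to a 2D correction of the tracked point. This is a minimal sketch under assumed conventions; the support-pixel layout, training scheme, sampling and all function names are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

def sample(image, positions):
    """Nearest-neighbour intensity lookup at float (x, y) positions (assumed in-bounds)."""
    xy = np.rint(positions).astype(int)
    return image[xy[:, 1], xy[:, 0]].astype(float)

def train_lp(image, centre, offsets, displacements):
    """Learn the LP matrix H by least squares from synthetic displacements.

    centre:        (2,) feature-point location in the training image
    offsets:       (k, 2) support-pixel offsets around the feature point
    displacements: (n, 2) synthetic displacements used to generate training pairs
    """
    ref = sample(image, centre + offsets)                     # reference intensities
    P = np.stack([sample(image, centre + d + offsets) - ref   # intensity differences
                  for d in displacements])                    # shape (n, k)
    T = -np.asarray(displacements, dtype=float)               # corrections that recover the point
    H, *_ = np.linalg.lstsq(P, T, rcond=None)                 # (k, 2) intensity-to-displacement mapping
    return H, ref

def predict_lp(image, estimate, offsets, H, ref):
    """Map intensity differences at the current estimate to a refined position."""
    dp = sample(image, estimate + offsets) - ref
    return estimate + dp @ H
```

In the paper, such predictors are additionally biased for robustness, grouped into rigid flocks, chosen by probabilistic selection of their visual support, combined across resolutions, and regularized by a simple shape constraint; the sketch covers only the basic intensity-to-displacement mapping that underlies those extensions.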

    • "Consequently, a growing number of face image-based applications have been developed and investigated. These include face detection (Zhang and Zhang 2010), alignment (Liu 2009), tracking (Ong and Bowden 2011), modeling (Tao et al. 2008), and recognition (Chellappa et al. 1995; Zhao et al. 2003) for security control, surveillance monitoring, authentication, biometrics, digital entertainment and rendered services for a legitimate user only, and age synthesis and estimation (Fu et al. 2010) for explosively emerging real-world applications such as forensic art, electronic customer relationship management , and cosmetology. "
    ABSTRACT: This paper comprehensively surveys the development of face hallucination (FH), including both face super-resolution and face sketch-photo synthesis techniques. Indeed, these two techniques share the same objective of inferring a target face image (e.g., a high-resolution face image, face sketch, or face photo) from a corresponding source input (e.g., a low-resolution face image, face photo, or face sketch). Considering the critical role of image interpretation in modern intelligent systems for authentication, surveillance, law enforcement, security control, and entertainment, FH has attracted growing attention in recent years. Existing FH methods can be grouped into four categories: Bayesian inference approaches, subspace learning approaches, combinations of Bayesian inference and subspace learning approaches, and sparse representation-based approaches. Despite considerable progress, the success of FH remains limited by complex application conditions such as varying illumination, pose, and viewpoint. This paper provides a holistic understanding of and deep insight into FH, and presents a comparative analysis of representative methods and promising future directions.
    International Journal of Computer Vision (Impact Factor: 3.53). 09/2013; 106(1). DOI: 10.1007/s11263-013-0645-9
    • "This FBT is accurate for simultaneous head and facial feature tracking but inherits the drawbacks of stereo vision and optical flow computation; Namely, this system is restricted to controlled illumination, it requires pre-calibration and it is sensitive to large variations in head pose and facial feature position. Instead, [3] proposes a statistical method based on a set of linear predictors modelling intensity information for accurate and real-time tracking of facial features. Active Shape Models (ASM) [4] are an alternative to FBT. "
    ABSTRACT: In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as for tracking 3D head pose, lip and eyebrow facial actions. Furthermore, our approach applies on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg-Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial action tracking in real time.
    Image and Vision Computing (Impact Factor: 1.58). 04/2013; 31(4):322-340. DOI: 10.1016/j.imavis.2013.02.001
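The preceding abstract attributes the optimization to a Levenberg-Marquardt Algorithm (LMA) enhanced with line-search procedures. The fragment below is a minimal, generic sketch of one such damped Gauss-Newton step with a backtracking line search; the residual and Jacobian callbacks, the damping value, and all names are assumptions for illustration only, not the authors' tracker.

```python
import numpy as np

def lm_line_search_step(residual, jacobian, params, damping=1e-3,
                        shrink=0.5, max_backtracks=8):
    """One damped Gauss-Newton (Levenberg-Marquardt) update with backtracking.

    residual: callable mapping a parameter vector to a residual vector r
    jacobian: callable mapping a parameter vector to the Jacobian dr/dparams
    params:   current parameter estimate (1D NumPy array)
    """
    r = residual(params)
    J = jacobian(params)
    # Solve the damped normal equations (J^T J + lambda * I) delta = -J^T r.
    A = J.T @ J + damping * np.eye(params.size)
    delta = np.linalg.solve(A, -J.T @ r)

    # Backtracking line search along the LM direction: accept the first
    # step length that reduces the squared residual cost.
    cost = 0.5 * float(r @ r)
    step = 1.0
    for _ in range(max_backtracks):
        trial = params + step * delta
        r_trial = residual(trial)
        if 0.5 * float(r_trial @ r_trial) < cost:
            return trial
        step *= shrink
    return params   # no improving step found; keep the current estimate
```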
    • "However, this assumption does not represent precise real-life applications since human does move a lot. Certain parts of the body such as mouth and eyes move unintentionally as reported by Jon and Bowden [87]. In surgery, internal organs also move unconsciously such as heart pumping, colons contraction and lungs expanding motion. "
    ABSTRACT: The telepointer is a powerful tool in telemedicine systems that enhances the effectiveness of long-distance communication. Telepointers have been tested in telemedicine and have the potential to greatly improve the quality of health care, especially in rural areas. A telepointer system works by sending additional information in the form of gestures that can convey more accurate instructions or information. This leads to more effective communication, more precise diagnoses, and better decisions through discussion and consultation between experts and junior clinicians. However, there is as yet no review paper on the state of the art of telepointers in telemedicine. This paper is intended to give readers an overview of recent advances in telepointer technology as a support tool in telemedicine. There are four popular modes of telepointer system, namely cursor, hand, laser, and sketching pointers. The results show that although telepointer technology has great potential for wider acceptance in real-life applications, its real-time positioning accuracy still needs improvement, and more results from actual tests with real patients need to be reported. We believe that by addressing these two issues, telepointer technology will be embraced widely by researchers and practitioners.
    BioMedical Engineering OnLine (Impact Factor: 1.75). 03/2013; 12(1):21. DOI: 10.1186/1475-925X-12-21