Conference Paper

Automatic Face Replacement in Video Based on 2D Morphable Model

Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
DOI: 10.1109/ICPR.2010.551 · Conference: 20th International Conference on Pattern Recognition (ICPR), 2010
Source: IEEE Xplore

ABSTRACT This paper presents an automatic face replacement approach for video based on a 2D morphable model. Our approach includes three main modules: face alignment, face morph, and face fusion. Given a source image and a target video, Active Shape Models (ASM) are applied to the source image and the target frames for face alignment. The source face shape is then warped to match the target face shape by a 2D morphable model. The color and lighting of the source face are adjusted to stay consistent with those of the target face, and the source face is seamlessly blended into the target frame. Our approach is fully automatic, requires no user intervention, and generates natural and realistic results.
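
As a rough, non-authoritative illustration of those three modules, the sketch below strings together off-the-shelf components. It is not the authors' implementation: dlib's 68-point landmark detector is assumed as a stand-in for the ASM alignment, a single affine fit stands in for the 2D morphable-model warp, and OpenCV's Poisson-based seamlessClone covers the color/lighting adjustment and blending. All function and file names are illustrative.

    # Sketch of the three modules (alignment, morph, fusion) from off-the-shelf parts.
    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def landmarks(img):
        """Stand-in for ASM alignment: (N, 2) float32 array of facial landmarks."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        face = detector(gray)[0]                      # assumes one face is present
        pts = predictor(gray, face)
        return np.array([(p.x, p.y) for p in pts.parts()], dtype=np.float32)

    def replace_face(source_img, target_frame):
        src_pts, dst_pts = landmarks(source_img), landmarks(target_frame)

        # Face morph: warp the source face shape toward the target face shape.
        # (A piecewise-affine warp over a landmark triangulation is closer to a
        # 2D morphable model; one global affine keeps the sketch short.)
        M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
        h, w = target_frame.shape[:2]
        warped = cv2.warpAffine(source_img, M, (w, h))

        # Face fusion: mask the target face region and blend the warped source
        # face seamlessly, which also pulls its color/lighting toward the target.
        hull = cv2.convexHull(dst_pts.astype(np.int32))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, hull, 255)
        x, y, bw, bh = cv2.boundingRect(hull)
        center = (x + bw // 2, y + bh // 2)
        return cv2.seamlessClone(warped, target_frame, mask, center, cv2.NORMAL_CLONE)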

  • Pérez, Gangnet, Blake: Poisson Image Editing (see the blending sketch after this list)
    ABSTRACT: Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless importation of both opaque and transparent source image regions into a destination region. The second set is based on similar mathematical ideas and allows the user to modify the appearance of the image seamlessly, within a selected region. These changes can be arranged to affect the texture, the illumination, and the color of objects lying in the region, or to make tileable a rectangular selection.
    ACM Transactions on Graphics 07/2003; 22(3):313-318. DOI:10.1145/1201775.882269 · 4.10 Impact Factor
  • Bitouk, Kumar, Dhillon, Belhumeur, Nayar: Face Swapping: Automatically Replacing Faces in Photographs (see the ranking sketch after this list)
    ABSTRACT: In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.
    ACM Transactions on Graphics 01/2008; 27. DOI:10.1145/1399504.1360638 · 4.10 Impact Factor
  • Cootes, Taylor, Cooper, Graham: Active Shape Models: Their Training and Application (see the shape-model sketch after this list)
    ABSTRACT: Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
    Computer Vision and Image Understanding 01/1995; 61(1):38–59. DOI:10.1006/cviu.1995.1004 · 1.54 Impact Factor
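
To make the guided interpolation in the Poisson image editing entry concrete, here is a toy 1-D version (the signals and region are invented for illustration): inside the pasted region the result keeps the source's gradients, while on the boundary it is pinned to the destination values.

    # Toy 1-D Poisson blend: solve A f = b, where A is the 1-D Laplacian on the
    # interior of the pasted region and b carries the source Laplacian (guidance
    # field) plus the fixed destination values on the region boundary.
    import numpy as np

    dst = np.linspace(10.0, 20.0, 11)                   # destination signal
    src = 5.0 * np.sin(np.linspace(0.0, np.pi, 11))     # source signal to import
    inside = np.arange(3, 8)                            # interior of the region

    n = len(inside)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.array([2 * src[i] - src[i - 1] - src[i + 1] for i in inside])
    b[0] += dst[inside[0] - 1]                          # left boundary condition
    b[-1] += dst[inside[-1] + 1]                        # right boundary condition

    result = dst.copy()
    result[inside] = np.linalg.solve(A, b)              # seamless values inside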
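
The third stage of the face-replacement system above ranks blended candidates by a match distance over the overlap region. A minimal sketch of that idea follows; the specific metric (mean squared difference in Lab space under the overlap mask) is an assumption for illustration, not necessarily the paper's measure.

    # Rank blended candidate replacements by a masked distance to the original.
    import cv2
    import numpy as np

    def match_distance(blended, original, mask):
        """Mean squared Lab-space difference over the overlap mask (assumed metric)."""
        a = cv2.cvtColor(blended, cv2.COLOR_BGR2LAB).astype(np.float32)
        b = cv2.cvtColor(original, cv2.COLOR_BGR2LAB).astype(np.float32)
        m = mask.astype(bool)
        return float(np.mean((a[m] - b[m]) ** 2))

    def rank_candidates(blended_candidates, original, mask):
        """Return candidate indices sorted from best (lowest distance) to worst."""
        scored = sorted((match_distance(c, original, mask), i)
                        for i, c in enumerate(blended_candidates))
        return [i for _, i in scored]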
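
The defining property of Active Shape Models noted above, that the model can only deform in ways consistent with the training set, comes from a PCA shape model whose mode coefficients are clamped to limits learned from the training shapes. A minimal numpy sketch of that core follows; Procrustes alignment and the iterative image search are omitted.

    # Learn a mean shape plus principal modes from aligned training shapes, then
    # constrain any candidate shape to lie within the learned variation.
    import numpy as np

    def build_shape_model(shapes, var_kept=0.98):
        """shapes: (num_examples, 2 * num_landmarks) array of aligned shapes."""
        mean = shapes.mean(axis=0)
        cov = np.cov(shapes - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]                 # largest modes first
        eigvals = np.maximum(eigvals[order], 0.0)         # guard tiny negatives
        eigvecs = eigvecs[:, order]
        t = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
        return mean, eigvecs[:, :t], eigvals[:t]

    def constrain(shape, mean, P, eigvals, limit=3.0):
        """Clamp each mode to +/- limit * sqrt(eigenvalue), so the shape can
        only deform in ways seen in the training set."""
        b = P.T @ (shape - mean)
        b = np.clip(b, -limit * np.sqrt(eigvals), limit * np.sqrt(eigvals))
        return mean + P @ b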