This paper proposes an efficient method for warping facial features. Existing methods rely on standard points for facial feature warping without justifying or calculating them, and they have difficulty warping facial features accurately. We estimate the standard points using a BSM (Bayesian Shape Model). Experimental results on various images show that the proposed algorithm produces more natural results than the conventional algorithm and is more efficient than the ASM (Active Shape Model).
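The warping step driven by standard points can be illustrated with a minimal sketch (this is not the authors' implementation; all function names are hypothetical): given estimated standard points and their target positions, fit an affine transform by least squares and apply it to feature coordinates.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks to dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 3).
    Returns a (2, 3) matrix A so that dst ~= [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # (2, 3)

def warp_points(pts, A):
    """Apply the affine transform A to an (M, 2) array of points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

# Hypothetical example: three standard points and their warped targets
# (a uniform scale by 2 plus a shift of (1, 1)).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[1.0, 1.0], [3.0, 1.0], [1.0, 3.0]])
A = fit_affine(src, dst)
print(warp_points(np.array([[0.5, 0.5]]), A))  # -> approximately [[2.0, 2.0]]
```

A real system would use a non-rigid warp (e.g. piecewise-affine over a landmark triangulation), but the least-squares fit shows how the standard points drive the transform.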
ABSTRACT: A Bayesian Model (BM) is proposed in this paper for extracting facial features. In the BM, the prior distribution of object shapes, which reflects the global shape variations of the object contour, is first estimated from the sample data. This distribution is then utilized to constrain and dynamically adjust the prototype contour during matching, so that large or global shape deformations due to sample variation can be tolerated. Moreover, a transformation-invariant internal energy term is introduced to describe mainly the local shape deformations between the prototype contour in the shape domain and the deformable contour in the image domain, so that the proposed BM can match objects undergoing not only global but also local variations. Experimental results on real facial feature extraction demonstrate that the BM is more robust and less sensitive to the positions, viewpoints, and deformations of object shapes than the Active Shape Model (ASM) algorithm.
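The core idea of the shape prior described above can be sketched as follows (a minimal illustration under a Gaussian assumption, not the paper's actual model): estimate a mean and covariance over training shapes, then compute the MAP shape given a noisy observed contour, which pulls the observation toward the mean more strongly along directions with little training variance.

```python
import numpy as np

def shape_prior(train):
    """Estimate a Gaussian prior N(mu, C) over flattened landmark shapes.

    train: (S, 2K) array, one row per sample shape (K landmarks).
    """
    mu = train.mean(axis=0)
    C = np.atleast_2d(np.cov(train, rowvar=False))
    return mu, C

def constrain(obs, mu, C, noise_var=0.1):
    """MAP estimate of the shape given observation obs = shape + noise.

    With prior N(mu, C) and isotropic noise N(0, noise_var * I), the
    posterior mean is mu + C (C + noise_var I)^{-1} (obs - mu).
    """
    K = C @ np.linalg.inv(C + noise_var * np.eye(C.shape[0]))
    return mu + K @ (obs - mu)

# Toy data: shapes vary mostly along the first coordinate.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, (200, 4)) * np.array([2.0, 0.1, 0.1, 0.1])
mu, C = shape_prior(train)
obs = np.array([1.0, 1.0, 1.0, 1.0])  # deviates equally in every coordinate
fit = constrain(obs, mu, C)
# The first coordinate (high prior variance) is kept almost intact,
# while the low-variance coordinates are shrunk toward the mean.
```

This captures the "constrain the prototype contour with the prior" step only; the paper's internal energy term for local deformations is a separate ingredient.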
ABSTRACT: Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.
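The "deform only in ways consistent with the training set" constraint is conventionally realized with PCA over aligned landmark shapes. A hedged sketch of that idea (an illustration, not the authors' code): learn the mean shape and principal modes of variation, then clamp each mode coefficient to a few standard deviations so a fitted shape stays plausible.

```python
import numpy as np

def train_shape_model(shapes, n_modes=2):
    """Learn mean shape and principal modes of variation via PCA.

    shapes: (S, 2K) array of aligned, flattened landmark shapes.
    Returns (mean, modes, variances) keeping the top n_modes.
    """
    mean = shapes.mean(axis=0)
    _, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = (s ** 2) / (shapes.shape[0] - 1)
    return mean, Vt[:n_modes], var[:n_modes]

def project_to_model(shape, mean, modes, var, k=3.0):
    """Fit a shape with the model, clamping each mode coefficient to
    +/- k standard deviations so the result remains a plausible shape."""
    b = modes @ (shape - mean)
    b = np.clip(b, -k * np.sqrt(var), k * np.sqrt(var))
    return mean + modes.T @ b

# Toy training set: shapes vary only along one direction.
rng = np.random.default_rng(1)
direction = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
shapes = rng.normal(0.0, 1.0, (100, 1)) * direction
mean, modes, var = train_shape_model(shapes)

# An implausible observed shape is pulled back into the learned subspace,
# within +/- 3 standard deviations of the mean.
fit = project_to_model(np.array([10.0, 10.0, 5.0, -5.0]), mean, modes, var)
```

In a full ASM this projection alternates with a local image search around each landmark; the sketch shows only the shape-constraint half.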
ABSTRACT: Previous research has shown that facial motion can carry information about age, gender, emotion and, at least to some extent, identity. By combining recent computer animation techniques with psychophysical methods, we show that during the computation of identity the human face recognition system integrates both types of information: individual non-rigid facial motion and individual facial form. This has important implications for cognitive and neural models of face perception, which currently emphasize a separation between the processing of invariant aspects (facial form) and changeable aspects (facial motion) of faces.
Vision Research 09/2003; 43(18):1921-36. DOI: 10.1016/S0042-6989(03)00236-0