Facial expression recognition based on diffeomorphic matching
ABSTRACT: This paper presents a new framework for facial expression recognition based on diffeomorphic matching. First, landmarks are selected, either manually or automatically. The landmarks from all images are registered to a reference landmark set using a rigid registration algorithm. Pair-wise geodesic distances between all sets of landmarks are then computed using diffeomorphic matching. Finally, a K-Nearest Neighbor (KNN) classifier uses these geodesic distances to classify a query image. Both the classification results and classical MultiDimensional Scaling show that geodesic distance is more effective than Euclidean distance at capturing face shape variation.
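A minimal sketch of the pipeline this abstract describes: rigid registration to a reference, a pair-wise distance matrix, and KNN voting. The geodesic distance from diffeomorphic matching is abstracted behind a `dist` callback (a full diffeomorphic/LDDMM solver is beyond this sketch; a plain Euclidean distance between aligned landmarks is used here as a stand-in). All function names are illustrative, not from the paper.

```python
import numpy as np

def rigid_align(X, ref):
    """Align landmark set X (n, 2) to ref by rotation + translation (Procrustes, no scaling)."""
    Xc, Rc = X - X.mean(0), ref - ref.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Rc)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # avoid reflections (Kabsch correction)
        U[:, -1] *= -1
        R = U @ Vt
    return Xc @ R + ref.mean(0)

def pairwise_dist(shapes, dist):
    """Symmetric matrix of pair-wise distances between landmark sets."""
    n = len(shapes)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dist(shapes[i], shapes[j])
    return D

def knn_classify(D_query_to_train, labels, k=3):
    """Classify each query row by majority vote among its k nearest training shapes."""
    out = []
    for row in D_query_to_train:
        nn = np.argsort(row)[:k]
        vals, counts = np.unique(labels[nn], return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)

# Stand-in for the geodesic distance of the paper:
euclidean = lambda a, b: np.linalg.norm(a - b)
```

Replacing `euclidean` with a geodesic distance from a diffeomorphic matching solver is exactly the swap whose benefit the abstract reports.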
- Available from: Laurent Younes
ABSTRACT: This paper constructs metrics on the space of images I defined as orbits under group actions G. The groups studied include the finite-dimensional matrix groups and their products, as well as the infinite-dimensional diffeomorphisms examined in Trouvé (1999, Quarterly of Applied Math.) and Dupuis et al. (1998, Quarterly of Applied Math.). Left-invariant metrics are defined on the product G × I, thus allowing the generation of transformations of the background geometry as well as of the image values. Examples of the application of such metrics are presented for rigid object matching with and without signature variation, curve and volume matching, and structural generation in which image values are changed, supporting notions such as tissue creation in carrying one image to another.
International Journal of Computer Vision 12/2000; 41(1):61-84.
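Schematically (our notation, not necessarily the paper's), the distance induced on orbits under a group action takes the standard form

```latex
d\bigl([I_1],[I_2]\bigr) \;=\; \inf_{g \in G}\; \rho\bigl(g \cdot I_1,\; I_2\bigr),
```

where $\rho$ is a $G$-invariant distance on the image space. For the infinite-dimensional diffeomorphism groups, the length of $g$ itself is typically measured through the energy of a flow,

```latex
d_G(e, g) \;=\; \inf \left\{ \int_0^1 \|v_t\|_V \, dt \;:\; \dot\varphi_t = v_t \circ \varphi_t,\; \varphi_0 = \mathrm{id},\; \varphi_1 = g \right\},
```

which is the construction examined in the Trouvé and Dupuis et al. references cited above.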
Article: Toward practical smile detection.
ABSTRACT: Machine learning approaches have produced some of the highest reported performances for facial expression recognition. To date, however, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases collected under controlled lighting conditions from a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We explore the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented, containing pictures photographed by the subjects themselves, from thousands of different people in many different real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could potentially lead research into locally optimal algorithmic solutions.
IEEE Transactions on Pattern Analysis and Machine Intelligence 11/2009; 31(11):2106-2111.
-
ABSTRACT: We describe a method of constructing parametric statistical models of shape variation which can generate continuous diffeomorphic (non-folding) deformation fields. Traditional statistical shape models are constructed by analysis of the positions of a set of landmark points. Here, we describe an algorithm which models parameters of continuous warp fields, constructed by composing simple parametric diffeomorphic warps. The warps are composed in such a way that the deformations are always defined in a reference frame. This allows the parameters controlling the deformations to be meaningfully compared from one example to another. A linear model is learnt to represent the variations in the warp parameters across the training set. This model can then be used to generalise the deformations. Models can be built either from sets of annotated points, or from unlabelled images. In the latter case, we use techniques from non-rigid registration to construct the warp fields deforming a reference image into each example. We describe the technique in detail and give examples of the resulting models.
Image and Vision Computing 03/2008; 26(3):326–332.
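As a loose illustration of the two ideas in this abstract (composing simple parametric diffeomorphic warps, then learning a linear model of the warp parameters across a training set), the following sketch uses a Gaussian-bump displacement as the "simple parametric warp" and plain PCA via SVD as the linear model. The actual warp family and fitting procedure in the paper may differ; all names here are our own.

```python
import numpy as np

def gaussian_warp(points, center, alpha, sigma=0.5):
    """One simple parametric warp: points (n, 2) are displaced by a Gaussian bump.
    For small ||alpha|| relative to sigma the map remains invertible (non-folding)."""
    w = np.exp(-np.sum((points - center) ** 2, axis=1) / sigma ** 2)
    return points + w[:, None] * alpha

def compose_warps(points, params):
    """Compose simple warps in sequence; params is a list of (center, alpha) pairs.
    Composition keeps each deformation defined relative to the reference frame."""
    for center, alpha in params:
        points = gaussian_warp(points, center, alpha)
    return points

def linear_model(param_vectors, n_modes=2):
    """PCA-style linear model of warp-parameter variation across a training set.
    Returns the mean parameter vector, the leading modes, and their weights."""
    P = np.asarray(param_vectors)
    mean = P.mean(axis=0)
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    return mean, Vt[:n_modes], S[:n_modes]
```

New shape instances can then be generated by moving along the learnt modes from the mean parameter vector and applying `compose_warps`, which is the sense in which the linear model "generalises the deformations".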