Conference Paper

A Fast Globally Supervised Learning Algorithm for Gaussian Mixture Models.

Chinese Academy of Sciences, Beijing, China
DOI: 10.1007/3-540-45151-X_42 Conference: Web-Age Information Management, First International Conference, WAIM 2000, Shanghai, China, June 21-23, 2000, Proceedings
Source: DBLP


In this paper, a fast globally supervised learning algorithm for Gaussian mixture models based on maximum relative entropy (MRE) is proposed. To reduce the computational cost of evaluating the Gaussian component probability densities, the concept of a quasi-Gaussian probability density is used to compute simplified probabilities. Four learning algorithms, namely the maximum mutual information (MMI) algorithm, maximum likelihood estimation (MLE), the generalized probabilistic descent (GPD) algorithm, and the proposed MRE algorithm, are evaluated using a random-experiment approach. The experimental results show that MRE is a better alternative to GPD, MMI, and MLE in both accuracy and training speed.
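The quasi-Gaussian simplification can be sketched roughly as follows. The abstract does not spell out its exact form, so this is a hedged illustration under an assumption: a common way to cheapen mixture-density evaluation is to truncate each component beyond a few standard deviations, skipping the exp() call for distant components. All function names and the cutoff value are illustrative, not the paper's actual definitions.

```python
import math

def gaussian_pdf(x, mu, var):
    """Exact univariate Gaussian density N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def quasi_gaussian_pdf(x, mu, var, cutoff=3.0):
    """Hypothetical 'quasi-Gaussian' density: components farther than
    `cutoff` standard deviations contribute zero, so distant components
    cost a single comparison instead of an exp() call.  (Illustrative
    assumption, not the paper's exact simplification.)"""
    z2 = (x - mu) ** 2 / var
    if z2 > cutoff ** 2:
        return 0.0
    return math.exp(-0.5 * z2) / math.sqrt(2.0 * math.pi * var)

def gmm_density(x, weights, mus, variances, pdf=gaussian_pdf):
    """Mixture density p(x) = sum_k w_k * N(x; mu_k, var_k)."""
    return sum(w * pdf(x, m, v) for w, m, v in zip(weights, mus, variances))

# Two-component mixture: the distant second component is skipped by the
# quasi-Gaussian evaluation with negligible loss of accuracy.
weights, mus, variances = [0.6, 0.4], [0.0, 5.0], [1.0, 2.0]
exact = gmm_density(0.0, weights, mus, variances)
fast = gmm_density(0.0, weights, mus, variances, pdf=quasi_gaussian_pdf)
```

The trade-off is the usual one for such approximations: a tiny, bounded error in the density in exchange for avoiding most exponential evaluations when components are well separated.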

    • "It is important to note that the discriminative power of a cluster in the context of image classification is a function of its purity which depends on both the cluster uncertainty and the class uncertainty. Although several supervised or semi-supervised GM models have been proposed in various domains [22] [23] [24] [25] and in visual dictionary creation [12] [16] [26] [27], none of them addresses the problem of this two-fold uncertainty and none jointly optimizes generalization and discriminative abilities of clusters. To solve these limitations, we present in this paper a new dictionary learning method which optimizes a convex combination of the likelihood of the (labeled and unlabeled) training data and the purity of the clusters. "
    ABSTRACT: The creation of semantically relevant clusters is vital in bag-of-visual-words models, which are known to be very successful for image classification tasks. Generally, unsupervised clustering algorithms, such as K-means, are employed to create such clusters, from which visual dictionaries are deduced. K-means performs a hard assignment by associating each image descriptor with the cluster with the nearest mean; in this way, the within-cluster sum of squared distances is minimized. A limitation of this approach in the context of image classification is that it usually does not use any supervision, which limits the discriminative power of the resulting visual words (typically the centroids of the clusters). More recently, some supervised dictionary creation methods based on both supervised information and data fitting were proposed, leading to more discriminative visual words. But none of them considers the uncertainty present at both the image-descriptor and cluster levels. In this paper, we propose a supervised learning algorithm based on a Gaussian mixture model which not only generalizes the K-means algorithm by allowing soft assignments, but also exploits supervised information to improve the discriminative power of the clusters. Technically, our algorithm optimizes, using an EM-based approach, a convex combination of two criteria: the first is unsupervised and based on the likelihood of the training data; the second is supervised and takes into account the purity of the clusters. We show on two well-known datasets that our method creates more relevant clusters, by comparing its behavior with state-of-the-art dictionary creation methods.
    Article · Feb 2012 · Pattern Recognition
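The convex-combination criterion described in the abstract above can be sketched as follows. The names, the purity definition used here (weighted dominant-class share under soft assignments), and the toy data are illustrative assumptions, not the cited paper's actual implementation.

```python
def combined_objective(resp, labels, loglik, alpha=0.5):
    """Convex combination of data log-likelihood and cluster purity,
    in the spirit of the dictionary-learning criterion quoted above.

    resp[i][k] -- soft assignment (responsibility) of descriptor i to cluster k
    labels[i]  -- class label of (labeled) descriptor i
    loglik     -- GMM log-likelihood of the training data
    alpha      -- mixing weight between the two criteria

    The purity term used here (weighted dominant-class share) is an
    illustrative assumption.
    """
    n, num_clusters = len(resp), len(resp[0])
    purity = 0.0
    for k in range(num_clusters):
        mass = sum(resp[i][k] for i in range(n))
        if mass == 0.0:
            continue
        # Soft class distribution inside cluster k.
        shares = [sum(resp[i][k] for i in range(n) if labels[i] == c) / mass
                  for c in sorted(set(labels))]
        purity += (mass / n) * max(shares)
    return alpha * loglik + (1.0 - alpha) * purity

# Toy example: two pure clusters (purity = 1.0) and a made-up log-likelihood.
resp = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
labels = [0, 0, 1]
score = combined_objective(resp, labels, loglik=-3.0, alpha=0.5)
```

With alpha = 1 the criterion reduces to the purely unsupervised likelihood; with alpha = 0 only cluster purity matters, which is the balance the abstract describes.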
    • "There are many resources about Gaussian models and fast learning algorithms [1], [2], [3], [4], [5], [6], also about video object tracking [7], [8], [9]. But to our knowledge there is no paper using Gaussian models and online, real-time learning algorithms for analyzing the tracked video object's duration parameter for behavior analysis in surveillance systems. "
    ABSTRACT: The persistence of objects in a scene is an important parameter of video object tracking systems. From the analysis of objects' durations of stay we learn not only how long they remain in the scene, but also precisely where they spend their time. The video frame is therefore segmented into clusters, and objects that pass through or stay in a region are assigned to the corresponding cluster. If we observe all objects over a time period, we obtain a model of object behavior with respect to duration for each cluster. Using the built model, we try to find abnormal object behavior. To build a model of an object's spatial duration from the video data, we use Gaussians and a fast learning algorithm suitable for real-time surveillance applications on embedded systems.
    Conference Paper · Jun 2009
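An online Gaussian duration model of the kind this abstract describes can be sketched with Welford's incremental mean/variance update. The class name, the 3-sigma abnormality rule, and the sample durations are assumptions for illustration, not the cited paper's algorithm.

```python
import math

class OnlineDurationModel:
    """Per-cluster Gaussian model of object duration, updated one
    observation at a time via Welford's online mean/variance algorithm,
    a common choice for real-time learning on embedded systems."""

    def __init__(self):
        self.n = 0        # number of observed durations
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, duration):
        """Fold one observed duration into the running statistics."""
        self.n += 1
        delta = duration - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (duration - self.mean)

    def is_abnormal(self, duration, k=3.0):
        """Flag a duration more than k standard deviations from the mean
        (an illustrative abnormality rule)."""
        if self.n < 2:
            return False  # not enough data to estimate variance
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(duration - self.mean) > k * std

# Feed typical durations for one cluster, then test an outlier.
model = OnlineDurationModel()
for d in [10, 11, 9, 10, 12, 10]:
    model.update(d)
```

Each update is O(1) in time and memory, which is what makes this kind of model attractive for the embedded, real-time setting the abstract targets.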